We typically combine the first two lines into one expression like this:
age = int(input("Enter your age: "))
type(age)
content/lessons/02-Variables/LAB-Variables.ipynb
IST256/learn-python
mit
1.1 You Code: Debugging The following program has errors in it. Your task is to fix the errors so that: your age can be input and converted to an integer, and the program outputs your age and your age next year. For example: Enter your age: 45 Today you are 45 next year you will be 46
# TODO: Debug this code
age = input("Enter your age: ")
nextage = age + 1
print("Today you are age next year you will be {nextage}")
Format Codes Python has string format codes which allow us to control the output of our variables: %s formats a variable as str, %d formats it as int, and %f formats it as float. You can also include a field width and precision; for example %5.2f prints a float in 5 characters with 2 digits to the right of the decimal point.
name = "Mike"
age = 45
gpa = 3.4
print("%s is %d years old. His gpa is %.3f" % (name, age, gpa))
Formatting with F-Strings The other method of formatting data in Python is f-strings. As we saw in the last lab, f-strings use interpolation to specify the variables we would like to print in-line with the print string. You can format values inside an f-string: {var:d} formats var as an integer, {var:f} formats var as a float, and {var:.3f} formats var as a float to 3 decimal places. Example:
name = "Mike"
wage = 15
print(f"{name} makes ${wage:.2f} per hour")
1.2 You Code Re-write the program from (1.1 You Code) so that the print statement uses format codes. Remember: do not copy code; as practice, re-write it.
# TODO: Write code here
1.3 You Code Use f-strings or format codes to print the PI variable out 3 times: once as a string, once as an int, and once as a float to 4 decimal places.
#TODO: Write Code Here
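One possible approach to the exercise above is sketched below; the value of PI here is illustrative (the lab presumably defines PI earlier), and note that the :d format code requires an actual integer, so the float must be converted first:

```python
PI = 3.14159265  # illustrative value; the lab presumably defines PI earlier

print(f"{str(PI)}")    # once as a string
print(f"{int(PI):d}")  # once as an int (int() truncates the decimal part)
print(f"{PI:.4f}")     # once as a float to 4 decimal places
```

The same output can be produced with format codes (%s, %d, %.4f) instead of f-strings.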
Putting it all together: Fred's Fence Estimator Fred's Fence has hired you to write a program to estimate the cost of their fencing projects. For a given length and width you will calculate the number of 6-foot fence sections and the total cost of the project. Each fence section costs $23.95. Assume the posts and labor are free. Program Inputs: Length of yard in feet; Width of yard in feet. Program Outputs: Perimeter of yard ( 2 x (Length + Width) ); Number of fence sections required ( Perimeter divided by 6 ); Total cost for fence ( fence sections multiplied by $23.95 ). NOTE: All outputs should be formatted to 2 decimal places: e.g. 123.05
```
TODO:
1. Input length of yard as float, assign to a variable
2. Input width of yard as float, assign to a variable
3. Calculate perimeter of yard, assign to a variable
4. Calculate number of fence sections, assign to a variable
5. Calculate total cost, assign to a variable
6. Print perimeter of yard
7. Print number of fence sections
8. Print total cost for fence.
```
1.4 You Code Based on the provided TODO, write the program in Python in the cell below. Your solution should have 8 lines of code, one for each TODO. HINT: Don't try to write the program in one sitting. Instead write a line of code, run it, verify it works, and fix any issues with it before writing the next line of code.
# TODO: Write your code here
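For reference, the arithmetic in the TODO steps above can be sketched as a function; the names here are illustrative (not the official solution), and sample values stand in for the float(input(...)) calls so the math can be checked:

```python
SECTION_COST = 23.95  # dollars per 6-foot fence section

def fence_estimate(length, width):
    perimeter = 2 * (length + width)  # feet of fence needed
    sections = perimeter / 6          # 6-foot sections (not rounded, per the spec)
    cost = sections * SECTION_COST    # total cost in dollars
    return perimeter, sections, cost

perimeter, sections, cost = fence_estimate(30.0, 30.0)
print("Perimeter: %.2f" % perimeter)  # 120.00
print("Sections:  %.2f" % sections)   # 20.00
print("Cost:      %.2f" % cost)       # 479.00
```

In the lab itself the two inputs would come from float(input(...)) and the three prints would be the last three lines.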
Definition of the Pohlhausen profile The Pohlhausen profile is used to generalize the flat-plate profile to the case of curved boundaries or flows with external pressure gradients. The profile is defined as $$ \frac u U = F(\eta)+\lambda G(\eta) , \quad \eta=\frac y\delta$$ where $$ F = 2\eta-2\eta^3+\eta^4 $$ $$ G = \frac\eta 6 (1-\eta)^3 $$ $$ \lambda = \frac {\delta^2}\nu \frac{dU}{dx} $$ This can be easily defined using a set of Python functions
def pohlF(eta): return 2*eta-2*eta**3+eta**4
def pohlG(eta): return eta/6*(1-eta)**3
def pohl(eta,lam): return pohlF(eta)+lam*pohlG(eta)
lessons/Pohlhausen.ipynb
ultiyuan/test0
gpl-2.0
Let's take a look at how this changes with $\lambda$.
from matplotlib import pyplot
import numpy

def pohlPlot(lam):
    # `size` (the figure size) is assumed to be defined earlier in the notebook
    pyplot.figure(figsize=(size, size))
    pyplot.xlabel('u/U', fontsize=16)
    pyplot.axis([-0.1, 1.1, 0, 1])
    pyplot.ylabel('y/del', fontsize=16)
    eta = numpy.linspace(0.0, 1.0, 100)
    pyplot.plot(pohlF(eta), eta, lw=1, color='black')
    pyplot.plot(pohl(eta, lam), eta, lw=2, color='green')
Change lam below to see how the profile changes compared to the flat plate value.
pohlPlot(lam=7)
Loading a single rate The Rate class holds a single reaction rate and takes a reaclib file as input. There are a lot of methods in the Rate class that allow you to explore the rate.
c13pg = pyrl.Rate("c13-pg-n14-nacr")
examples/pynucastro-examples.ipynb
pyreaclib/pyreaclib
bsd-3-clause
The original reaclib source We can easily view the rate's original source entry as it appears in ReacLib
print(c13pg.original_source)
Evaluate the rate at a given temperature (in K) This is just the temperature-dependent portion of the rate, usually expressed as $N_A \langle \sigma v \rangle$
c13pg.eval(1.e9)
human readable string We can print out a string describing the rate, and the nuclei involved
print(c13pg)
The nuclei involved are all Nucleus objects. They have members Z and N that give the proton and neutron number
print(c13pg.reactants)
print(c13pg.products)
r2 = c13pg.reactants[1]
Note that each of the nuclei are a pynucastro Nucleus type
type(r2)
print(r2.Z, r2.N)
temperature sensitivity We can find the temperature sensitivity about some reference temperature. This is the exponent when we write the rate as $$r = r_0 \left ( \frac{T}{T_0} \right )^\nu$$. We can estimate this given a reference temperature, $T_0$
c13pg.get_rate_exponent(2.e7)
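The rate exponent is just the local logarithmic slope, $\nu = d\ln r / d\ln T$ at $T_0$. As a sanity check on the idea (this is not pynucastro's implementation), a central-difference estimate in log space recovers $\nu$ for a pure power-law rate:

```python
import math

def rate_exponent(rate, T0, eps=1e-4):
    """Estimate nu = d ln(r) / d ln(T) at T0 by a central difference in log space."""
    t_lo, t_hi = T0 * (1 - eps), T0 * (1 + eps)
    return (math.log(rate(t_hi)) - math.log(rate(t_lo))) / (math.log(t_hi) - math.log(t_lo))

# For r(T) = A * T**5 the exponent is exactly 5 at any temperature
nu = rate_exponent(lambda T: 2.5e-3 * T**5, 2.0e7)
print(nu)  # ~5.0
```

For a real ReacLib rate the slope varies with temperature, which is why get_rate_exponent takes a reference temperature.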
plot the rate's temperature dependence A reaction rate has a complex temperature dependence that is defined in the reaclib files. The plot() method will plot this for us
c13pg.plot()
A rate also knows its density dependence -- this is inferred from the reactants in the rate description and is used to construct the terms needed to write a reaction network. Note: since we want reaction rates per gram, this number is 1 less than the number of nuclei
c13pg.dens_exp
Working with a group of rates A RateCollection() class allows us to work with a group of rates. This is used to explore their relationship. Other classes (introduced soon) are built on this and will allow us to output network code directly.
files = ["c12-pg-n13-ls09",
         "c13-pg-n14-nacr",
         "n13--c13-wc12",
         "n13-pg-o14-lg06",
         "n14-pg-o15-im05",
         "n15-pa-c12-nacr",
         "o14--n14-wc12",
         "o15--n15-wc12"]
rc = pyrl.RateCollection(files)
Printing a rate collection shows all the rates
print(rc)
More detailed information is provided by network_overview()
print(rc.network_overview())
show a network diagram We visualize the network using NetworkX
rc.plot()
Explore the network's rates To evaluate the rates, we need a composition
comp = pyrl.Composition(rc.get_nuclei())
comp.set_solar_like()
Plot nuclides on a grid Nuclides in a network may also be visualized as cells on a grid of Z vs. N, colored by some quantity. This can be more interpretable for large networks. Calling gridplot without any arguments will just plot the grid - to see anything interesting we need to supply some conditions. Here is a plot of nuclide mass fraction on a log scale, with a 36 square inch figure:
rc.gridplot(comp=comp, color_field="X", scale="log", area=36)
Unlike the network plot, this won't omit hydrogen and helium by default. To just look at the heavier nuclides, we can define a function to filter by proton number:
ff = lambda nuc: nuc.Z > 2
rc.gridplot(comp=comp, rho=1e4, T=1e8, color_field="activity", scale="log",
            filter_function=ff, area=20, cmap="Blues")
Integrating networks If we don't just want to explore the network interactively in a notebook, but want to output code to integrate the network, we need to create one of PythonNetwork or StarKillerNetwork
pynet = pyrl.PythonNetwork(files)
A network knows how to express the terms that make up the function (in the right programming language). For instance, you can get the term for the ${}^{13}\mathrm{C} (p,\gamma) {}^{14}\mathrm{N}$ rate as:
print(pynet.ydot_string(c13pg))
and the code needed to evaluate that rate (the T-dependent part) as:
print(pynet.function_string(c13pg))
The write_network() method will output the python code needed to define the RHS of a network for integration with the SciPy integrators
pynet.write_network("cno_test_integrate.py")
%cat cno_test_integrate.py
We can now import the network that was just created and integrate it using the SciPy ODE solvers
import cno_test_integrate as cno
from scipy.integrate import solve_ivp
import numpy as np
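The generated module's exact interface isn't shown here, so as an illustration of the solve_ivp calling pattern, here is a minimal sketch with a stand-in right-hand side (a single exponential decay) in place of the generated network's RHS; the real call would pass the network's RHS function and initial mass fractions instead:

```python
from scipy.integrate import solve_ivp
import numpy as np

# Stand-in RHS (exponential decay with rate 0.5) in place of the network's right-hand side
def rhs(t, y):
    return -0.5 * y

y0 = np.array([1.0])  # stand-in initial condition
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="BDF", rtol=1.e-8, atol=1.e-10)

print(sol.y[0, -1])  # close to exp(-5)
```

Reaction networks are typically stiff, which is why an implicit method like BDF (rather than the default RK45) is the usual choice.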
A network plot
import matplotlib.pyplot as plt

# `sol` is assumed to be the solution object returned by solve_ivp
fig = plt.figure()
ax = fig.add_subplot(111)
for n in range(cno.nnuc):
    ax.loglog(sol.t, sol.y[n, :] * cno.A[n], label=f"X({n})")
ax.set_xlim(1.e10, 1.e20)
ax.set_ylim(1.e-8, 1.0)
ax.legend(fontsize="small")
fig.set_size_inches((10, 8))
Backpropagation In the next cells, you will be asked to complete functions for the Jacobian of the cost function with respect to the weights and biases. We will start with layer 3, which is the easiest, and work backwards through the layers. We'll define our Jacobians as, $$ \mathbf{J}_{\mathbf{W}^{(3)}} = \frac{\partial C}{\partial \mathbf{W}^{(3)}} $$ $$ \mathbf{J}_{\mathbf{b}^{(3)}} = \frac{\partial C}{\partial \mathbf{b}^{(3)}} $$ etc., where $C$ is the average cost function over the training set, i.e., $$ C = \frac{1}{N}\sum_k C_k $$ You calculated the following in the practice quizzes, $$ \frac{\partial C}{\partial \mathbf{W}^{(3)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}} \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{W}^{(3)}} ,$$ for the weight, and similarly for the bias, $$ \frac{\partial C}{\partial \mathbf{b}^{(3)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}} \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{b}^{(3)}} .$$ With the partial derivatives taking the form, $$ \frac{\partial C}{\partial \mathbf{a}^{(3)}} = 2(\mathbf{a}^{(3)} - \mathbf{y}) $$ $$ \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}} = \sigma'(\mathbf{z}^{(3)})$$ $$ \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{W}^{(3)}} = \mathbf{a}^{(2)}$$ $$ \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{b}^{(3)}} = 1$$ We'll do the J_W3 ($\mathbf{J}_{\mathbf{W}^{(3)}}$) function for you, so you can see how it works. You should then be able to adapt the J_b3 function, with help, yourself.
# GRADED FUNCTION

# Jacobian for the third layer weights. There is no need to edit this function.
def J_W3 (x, y) :
    # First get all the activations and weighted sums at each layer of the network.
    a0, z1, a1, z2, a2, z3, a3 = network_function(x)
    # We'll use the variable J to store parts of our result as we go along, updating it in each line.
    # Firstly, we calculate dC/da3, using the expressions above.
    J = 2 * (a3 - y)
    # Next multiply the result we've calculated by the derivative of sigma, evaluated at z3.
    J = J * d_sigma(z3)
    # Then we take the dot product (along the axis that holds the training examples) with the final partial derivative,
    # i.e. dz3/dW3 = a2
    # and divide by the number of training examples, for the average over all training examples.
    J = J @ a2.T / x.size
    # Finally return the result out of the function.
    return J

# In this function, you will implement the jacobian for the bias.
# As you will see from the partial derivatives, only the last partial derivative is different.
# The first two partial derivatives are the same as previously.
# ===YOU SHOULD EDIT THIS FUNCTION===
def J_b3 (x, y) :
    # As last time, we'll first set up the activations.
    a0, z1, a1, z2, a2, z3, a3 = network_function(x)
    # Next you should implement the first two partial derivatives of the Jacobian.
    # ===COPY TWO LINES FROM THE PREVIOUS FUNCTION TO SET UP THE FIRST TWO JACOBIAN TERMS===
    J = 2 * (a3 - y)
    J = J * d_sigma(z3)
    # For the final line, we don't need to multiply by dz3/db3, because that is multiplying by 1.
    # We still need to sum over all training examples however.
    # There is no need to edit this line.
    J = np.sum(J, axis=1, keepdims=True) / x.size
    return J
Backpropagation.ipynb
hktxt/MachineLearning
gpl-3.0
We'll next do the Jacobian for the Layer 2. The partial derivatives for this are, $$ \frac{\partial C}{\partial \mathbf{W}^{(2)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \left( \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} \right) \frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{z}^{(2)}} \frac{\partial \mathbf{z}^{(2)}}{\partial \mathbf{W}^{(2)}} ,$$ $$ \frac{\partial C}{\partial \mathbf{b}^{(2)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \left( \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} \right) \frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{z}^{(2)}} \frac{\partial \mathbf{z}^{(2)}}{\partial \mathbf{b}^{(2)}} .$$ This is very similar to the previous layer, with two exceptions: * There is a new partial derivative, in parentheses, $\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}}$ * The terms after the parentheses are now one layer lower. Recall the new partial derivative takes the following form, $$ \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} = \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}} \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{a}^{(2)}} = \sigma'(\mathbf{z}^{(3)}) \mathbf{W}^{(3)} $$ To show how this changes things, we will implement the Jacobian for the weight again and ask you to implement it for the bias.
# GRADED FUNCTION

# Compare this function to J_W3 to see how it changes.
# There is no need to edit this function.
def J_W2 (x, y) :
    # The first two lines are identical to in J_W3.
    a0, z1, a1, z2, a2, z3, a3 = network_function(x)
    J = 2 * (a3 - y)
    # the next two lines implement da3/da2, first σ' and then W3.
    J = J * d_sigma(z3)
    J = (J.T @ W3).T
    # then the final lines are the same as in J_W3 but with the layer number bumped down.
    J = J * d_sigma(z2)
    J = J @ a1.T / x.size
    return J

# As previously, fill in all the incomplete lines.
# ===YOU SHOULD EDIT THIS FUNCTION===
def J_b2 (x, y) :
    a0, z1, a1, z2, a2, z3, a3 = network_function(x)
    J = 2 * (a3 - y)
    J = J * d_sigma(z3)
    J = (J.T @ W3).T
    J = J * d_sigma(z2)
    J = np.sum(J, axis=1, keepdims=True) / x.size
    return J
Layer 1 is very similar to Layer 2, but with an additional partial derivative term. $$ \frac{\partial C}{\partial \mathbf{W}^{(1)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \left( \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} \frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{a}^{(1)}} \right) \frac{\partial \mathbf{a}^{(1)}}{\partial \mathbf{z}^{(1)}} \frac{\partial \mathbf{z}^{(1)}}{\partial \mathbf{W}^{(1)}} ,$$ $$ \frac{\partial C}{\partial \mathbf{b}^{(1)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \left( \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} \frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{a}^{(1)}} \right) \frac{\partial \mathbf{a}^{(1)}}{\partial \mathbf{z}^{(1)}} \frac{\partial \mathbf{z}^{(1)}}{\partial \mathbf{b}^{(1)}} .$$ You should be able to adapt lines from the previous cells to complete both the weight and bias Jacobian.
# GRADED FUNCTION

# Fill in all incomplete lines.
# ===YOU SHOULD EDIT THIS FUNCTION===
def J_W1 (x, y) :
    a0, z1, a1, z2, a2, z3, a3 = network_function(x)
    J = 2 * (a3 - y)
    J = J * d_sigma(z3)
    J = (J.T @ W3).T
    J = J * d_sigma(z2)
    J = (J.T @ W2).T
    J = J * d_sigma(z1)
    J = J @ a0.T / x.size
    return J

# Fill in all incomplete lines.
# ===YOU SHOULD EDIT THIS FUNCTION===
def J_b1 (x, y) :
    a0, z1, a1, z2, a2, z3, a3 = network_function(x)
    J = 2 * (a3 - y)
    J = J * d_sigma(z3)
    J = (J.T @ W3).T
    J = J * d_sigma(z2)
    J = (J.T @ W2).T
    J = J * d_sigma(z1)
    J = np.sum(J, axis=1, keepdims=True) / x.size
    return J
Test your code before submission To test the code you've written above, run all previous cells (select each cell, then press the play button [ ▶| ] or press shift-enter). You can then use the code below to test out your function. You don't need to submit these cells; you can edit and run them as much as you like. First, we generate training data, and generate a network with randomly assigned weights and biases.
x, y = training_data()
reset_network()
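A standard way to test hand-derived Jacobians like the ones above is a finite-difference gradient check: perturb each parameter slightly, recompute the cost, and compare the slope to the analytic derivative. Here is a self-contained sketch of the idea on a tiny function; the names are illustrative and not part of the assignment:

```python
import numpy as np

def numerical_grad(f, w, eps=1e-6):
    """Central-difference gradient of a scalar function f at the 1-D point w."""
    g = np.zeros_like(w)
    for i in range(w.size):
        dw = np.zeros_like(w)
        dw[i] = eps
        g[i] = (f(w + dw) - f(w - dw)) / (2 * eps)
    return g

# Example: C(w) = sum(w**2) has exact gradient 2*w
w = np.array([1.0, -2.0, 0.5])
approx = numerical_grad(lambda v: np.sum(v**2), w)
print(np.allclose(approx, 2 * w, atol=1e-5))  # True
```

To check J_W3 the same way, you would flatten W3, define f as the average cost as a function of those entries, and compare numerical_grad's output against your analytic Jacobian entry by entry.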
Next, if you've implemented the assignment correctly, the following code will iterate through a steepest descent algorithm using the Jacobians you have calculated. The function will plot the training data (in green), and your neural network solutions in pink for each iteration, and orange for the last output. It takes about 50,000 iterations to train this network. We can split this up though - 10,000 iterations should take about a minute to run. Run the line below as many times as you like.
plot_training(x, y, iterations=50000, aggression=7, noise=1)
Using the function above, we can compute the probability of getting heads exactly 24 times in 30 coin tosses.
binom_distribution(30, 24, 1/2.)
previous/y2017/W08-numpy-hypothesis_test/.ipynb_checkpoints/GongSu19-Statistical_Hypothesis_Test-checkpoint.ipynb
liganega/Gongsu-DataSci
gpl-3.0
By running the experiment above repeatedly, we can confirm that the result is close to the earlier theoretical calculation of a probability of about 0.0007, i.e. roughly 7 times in 10,000.
Is the coin fair? The simulation likewise shows that the probability of getting heads 24 or more times in 30 tosses falls far short of 5%. In this case we say we cannot accept the null hypothesis (H0) that the coin is fair; that is, it must be rejected.
What we have covered so far about hypothesis testing can be summarized as follows.
Six steps of hypothesis testing
1) Decide the hypothesis to test. * Null hypothesis: here, "the coin is fair".
2) Choose the statistical method used to test the hypothesis. * Here, the binomial distribution probability.
3) Set the rejection region. * Here, the top 5% in terms of the number of heads. * Meaning the probability of 24 heads would need to be at least 5% for the hypothesis to be accepted.
4) Find the p-value for the test statistic. * Here, we use simulation to compute the probability of the event in the hypothesis. * In some cases it can also be computed theoretically.
5) Check whether the sample result falls in the rejection region. * Here, check whether it is at most 5%.
6) Make a decision. * Here, the null hypothesis "the coin is fair" is rejected.
Exercise Implement the coin_experiment function that repeats the simulation, without using a for loop. Sample solution:
def coin_experiment_2(num_repeat):
    # num_tosses (30) is assumed to be defined earlier in the notebook
    experiment = np.random.randint(0, 2, [num_repeat, num_tosses])
    return experiment.sum(axis=1)

heads_count = coin_experiment_2(100000)
sns.distplot(heads_count, kde=False)
mask = heads_count >= 24
heads_count[mask].shape[0] / 100000
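The simulated tail probability above can also be computed exactly from the binomial distribution. Here is a short self-contained check using only the standard library (math.comb requires Python 3.8+):

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(binom_tail(30, 24))  # about 0.0007, i.e. roughly 7 in 10,000
```

This exact value is what the 100,000-trial simulation above should approximate.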
Train We're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.
checkpoint = "checkpoints/best_model.ckpt"

# Split data to training and validation sets
train_source = source_letter_ids[batch_size:]
train_target = target_letter_ids[batch_size:]
valid_source = source_letter_ids[:batch_size]
valid_target = target_letter_ids[:batch_size]
(valid_targets_batch, valid_sources_batch,
 valid_targets_lengths, valid_sources_lengths) = next(
    get_batches(valid_target, valid_source, batch_size,
                source_letter_to_int['<PAD>'],
                target_letter_to_int['<PAD>']))

display_step = 20  # Check training loss after every 20 batches

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(1, epochs + 1):
        for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(
                get_batches(train_target, train_source, batch_size,
                            source_letter_to_int['<PAD>'],
                            target_letter_to_int['<PAD>'])):

            # Training step
            _, loss = sess.run(
                [train_op, cost],
                {input_data: sources_batch,
                 targets: targets_batch,
                 lr: learning_rate,
                 target_sequence_length: targets_lengths,
                 source_sequence_length: sources_lengths})

            # Debug message updating us on the status of the training
            if batch_i % display_step == 0 and batch_i > 0:
                # Calculate validation cost
                validation_loss = sess.run(
                    [cost],
                    {input_data: valid_sources_batch,
                     targets: valid_targets_batch,
                     lr: learning_rate,
                     target_sequence_length: valid_targets_lengths,
                     source_sequence_length: valid_sources_lengths})

                print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f} - Validation loss: {:>6.3f}'
                      .format(epoch_i, epochs, batch_i,
                              len(train_source) // batch_size,
                              loss, validation_loss[0]))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, checkpoint)
    print('Model Trained and Saved')
Prediction
def source_to_seq(text):
    '''Prepare the text for the model'''
    sequence_length = 7
    return [source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text] \
        + [source_letter_to_int['<PAD>']] * (sequence_length - len(text))

input_sentence = 'ih'
text = source_to_seq(input_sentence)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(checkpoint + '.meta')
    loader.restore(sess, checkpoint)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('predictions:0')
    source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
    target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')

    # Multiply by batch_size to match the model's input parameters
    answer_logits = sess.run(logits, {input_data: [text] * batch_size,
                                      target_sequence_length: [len(text)] * batch_size,
                                      source_sequence_length: [len(text)] * batch_size})[0]

pad = source_letter_to_int["<PAD>"]

print('Original Text:', input_sentence)
print('\nSource')
print('  Word Ids:    {}'.format([i for i in text]))
print('  Input Words: {}'.format(" ".join([source_int_to_letter[i] for i in text])))
print('\nTarget')
print('  Word Ids:       {}'.format([i for i in answer_logits if i != pad]))
print('  Response Words: {}'.format(" ".join([target_int_to_letter[i] for i in answer_logits if i != pad])))
Analyzing Income Distribution The American Community Survey (published by the US Census) annually reports the number of individuals in a given income bracket at the state level. We can use this information, stored in Data Commons, to visualize disparity in income for each state in the US. Our goal for this tutorial will be to generate a plot that visualizes the total number of individuals across a given set of income brackets for a given state. Before we begin, we'll setup our notebook.
# Import the Data Commons Pandas library
import datacommons_pandas as dc

# Import other libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
notebooks/analyzing_income_distribution.ipynb
datacommonsorg/api-python
apache-2.0
Getting the Data The Data Commons graph identifies 16 different income brackets. The list of these variables can be found under the "Household" category in our list of StatisticalVariables. We can use get_places_in to get all states within the United States. We can then call build_multivariate_dataframe on the list of states to get a dataframe of per-income-bracket population counts for each state.
states = dc.get_places_in(['country/USA'], 'State')['country/USA']

# A list of income bracket StatisticalVariables
income_brackets = [
    "Count_Household_IncomeOfUpto10000USDollar",
    "Count_Household_IncomeOf10000To14999USDollar",
    "Count_Household_IncomeOf15000To19999USDollar",
    "Count_Household_IncomeOf20000To24999USDollar",
    "Count_Household_IncomeOf25000To29999USDollar",
    "Count_Household_IncomeOf30000To34999USDollar",
    "Count_Household_IncomeOf35000To39999USDollar",
    "Count_Household_IncomeOf40000To44999USDollar",
    "Count_Household_IncomeOf45000To49999USDollar",
    "Count_Household_IncomeOf50000To59999USDollar",
    "Count_Household_IncomeOf60000To74999USDollar",
    "Count_Household_IncomeOf75000To99999USDollar",
    "Count_Household_IncomeOf100000To124999USDollar",
    "Count_Household_IncomeOf125000To149999USDollar",
    "Count_Household_IncomeOf150000To199999USDollar",
    "Count_Household_IncomeOf200000OrMoreUSDollar",
]

data = dc.build_multivariate_dataframe(states, income_brackets)
data.head()
To get the names of states, we can use the get_property_values function:
# Get all state names and store them in a column "name"
# Get the first name, if there are multiple for a state
data.insert(0, 'name', data.index.map(dc.get_property_values(data.index, 'name')).str[0])
data.head(5)
Analyzing the Data Let's plot our data as a histogram. Notice that the income ranges as tabulated by the US Census are not equal. At the low end, the range is 0-9999, whereas, towards the top, the range 150,000-199,999 is five times as broad! We will make the width of each of the columns correspond to its range, which will give us an idea of the total earnings, not just the number of people in each group. First we provide code for generating the plot.
# Bar chart endpoints (for calculating bar width)
label_to_range = {
    "Count_Household_IncomeOfUpto10000USDollar": [0, 9999],
    "Count_Household_IncomeOf10000To14999USDollar": [10000, 14999],
    "Count_Household_IncomeOf15000To19999USDollar": [15000, 19999],
    "Count_Household_IncomeOf20000To24999USDollar": [20000, 24999],
    "Count_Household_IncomeOf25000To29999USDollar": [25000, 29999],
    "Count_Household_IncomeOf30000To34999USDollar": [30000, 34999],
    "Count_Household_IncomeOf35000To39999USDollar": [35000, 39999],
    "Count_Household_IncomeOf40000To44999USDollar": [40000, 44999],
    "Count_Household_IncomeOf45000To49999USDollar": [45000, 49999],
    "Count_Household_IncomeOf50000To59999USDollar": [50000, 59999],
    "Count_Household_IncomeOf60000To74999USDollar": [60000, 74999],
    "Count_Household_IncomeOf75000To99999USDollar": [75000, 99999],
    "Count_Household_IncomeOf100000To124999USDollar": [100000, 124999],
    "Count_Household_IncomeOf125000To149999USDollar": [125000, 149999],
    "Count_Household_IncomeOf150000To199999USDollar": [150000, 199999],
    "Count_Household_IncomeOf200000OrMoreUSDollar": [200000, 300000],
}

def plot_income(data, state_name):
    # Check that the specified "state_name" is a valid state name
    frame_search = data.loc[data['name'] == state_name].squeeze()
    if frame_search.shape[0] == 0:
        print('{} does not have sufficient income data to generate the plot!'.format(state_name))
        return

    # Drop the "name" column, keeping only the bracket counts
    data = frame_search[1:]

    # Calculate the bar widths (scaled down by 18) without gaps between bars
    widths_without_interval = []
    for bracket in income_brackets:
        r = label_to_range[bracket]
        widths_without_interval.append(int((r[1] - r[0]) / 18))

    # Calculate the x-axis positions
    pos, total = [], 0
    for l in widths_without_interval:
        pos.append(total + (l // 2))
        total += l

    # Calculate the bar widths, leaving a gap of 50 between bars
    widths = []
    for bracket in income_brackets:
        r = label_to_range[bracket]
        widths.append(int((r[1] - r[0]) / 18 - 50))

    # Plot the histogram
    plt.figure(figsize=(12, 10))
    plt.xticks(pos, income_brackets, rotation=90)
    plt.grid(True)
    plt.bar(pos, data.values, widths, color='b', alpha=0.3)

    # Return the resulting frame.
    return frame_search
We can then call this code with a state to plot the income bracket sizes.
#@title Enter State to plot { run: "auto" }
state_name = "Idaho" #@param ["Missouri", "Arkansas", "Arizona", "Ohio", "Connecticut", "Vermont", "Illinois", "South Dakota", "Iowa", "Oklahoma", "Kansas", "Washington", "Oregon", "Hawaii", "Minnesota", "Idaho", "Alaska", "Colorado", "Delaware", "Alabama", "North Dakota", "Michigan", "California", "Indiana", "Kentucky", "Nebraska", "Louisiana", "New Jersey", "Rhode Island", "Utah", "Nevada", "South Carolina", "Wisconsin", "New York", "North Carolina", "New Hampshire", "Georgia", "Pennsylvania", "West Virginia", "Maine", "Mississippi", "Montana", "Tennessee", "New Mexico", "Massachusetts", "Wyoming", "Maryland", "Florida", "Texas", "Virginia"]

result = plot_income(data, state_name)

# Show the plot
plt.show()
and we can display the raw table of values.
# Additionally print the table of income bracket sizes result
Introduction to TensorFlow Part 2 - Debugging and Control Flow <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Introduction_to_TensorFlow_Part_2_-_Debugging_and_Control_Flow.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Introduction_to_TensorFlow_Part_2_-_Debugging_and_Control_Flow.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table>
#@title Upgrade to TensorFlow 2.5+
!pip install --upgrade tensorflow

#@title Install and import Libraries for this colab. RUN ME FIRST!
!pip install matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.summary.writer.writer import FileWriter
%load_ext tensorboard
tf_quant_finance/examples/jupyter_notebooks/Introduction_to_TensorFlow_Part_2_-_Debugging_and_Control_Flow.ipynb
google/tf-quant-finance
apache-2.0
What this notebook covers This notebook carries on from part 1, and covers the basics of control flow and debugging in TensorFlow: * various debugging aids * loops * conditionals These features are used in graph mode or inside a tf.function. For simplicity, eager execution is disabled throughout this notebook. Debugging tf.py_function Full documentation This allows you to wrap a Python function as an op. The function can make further calls into TensorFlow, allowing e.g. a subset of TensorFlow operations to be wrapped up inside the function and inspected using pdb. There are various restrictions involved with this op around * serializing execution graphs * executing across distributed machines Read the documentation for more information. As such, this should be viewed as more of a debugging tool, and its use should be avoided in performance-sensitive code.
def plus_one(x):
    print("input has type %s, value %s" % (type(x), x))
    output = x + 1.0
    print("output has type %s, value %s" % (type(output), output))
    return output

# Let us create a graph where `plus_one` is invoked during the graph construction
g = tf.Graph()
with g.as_default():
    x = tf.constant([1.0, 2.0, 3.0])
    # Notice that print statements are not called during the graph construction
    a = tf.py_function(plus_one, inp=[x], Tout=tf.float32)

with tf.compat.v1.Session(graph=g) as sess:
    # During the runtime, input `x` is passed as an EagerTensor to `plus_one`
    print(sess.run(a))
tf.print Full Documentation The tf.print op is another useful debugging tool. It takes any number of tensors and python objects, and prints them to stdout. There are a few optional parameters, to control formatting and where it prints to. See the documentation for details.
# Define a TensorFlow function @tf.function def print_fn(x): # Note that `print_trace` is a TensorFlow Op. See the next section for details print_trace = tf.print( "`input` has value", x, ", type", type(x), "and shape", tf.shape(x)) # Create some inputs a = tf.constant([1, 2]) # Call the function print_fn(a)
If you're using eager execution mode, that's all you need to know. For deferred execution, however, there are some significant complications that we'll discuss in the next section. tf.control_dependencies Full Documentation One of the easiest optimisations TensorFlow makes when in deferred mode is to eliminate unused ops. So if we run this:
g = tf.Graph() with g.as_default(): a = tf.constant(1) + tf.constant(1) print_trace = tf.print("a is set to ", a) b = a * 2 with tf.compat.v1.Session(graph=g) as sess: results = sess.run(b)
Then we don't get any output. Nothing depends on print_trace (in fact no tensor can depend on it: tf.print returns an op with no outputs to consume), so it gets dropped from the graph before execution occurs. If you want print_trace to be evaluated, then you need to ask for it explicitly:
g = tf.Graph() with g.as_default(): a = tf.constant(1) + tf.constant(1) print_trace = tf.compat.v1.print("a is set to", a) b = a * 2 with tf.compat.v1.Session(graph=g) as sess: results = sess.run((b, print_trace))
That's fine for our noddy sample above. But it obviously has problems as your graph grows larger or the sess.run method gets further removed from the graph definition. The solution for that is tf.control_dependencies. This signals to TensorFlow that the given set of prerequisite ops must be evaluated before a set of dependent ops.
g = tf.Graph() with g.as_default(): a = tf.constant(1) + tf.constant(1) print_trace = tf.print("a is set to", a) hello_world = tf.print("hello world") with tf.control_dependencies((print_trace, hello_world)): # print_trace and hello_world will always be evaluated # before b can be evaluated b = a * 2 c = a * 3 with tf.compat.v1.Session(graph=g) as sess: results = sess.run(b)
Note that if all of the dependent ops are pruned from the dependency tree and thus not evaluated, then the prerequisites will not be evaluated either: e.g. if we call sess.run(c) in the example above, then print_trace and hello_world won't be evaluated.
# Nothing gets printed with tf.compat.v1.Session(graph=g) as sess: results = sess.run(c)
tf.debugging.Assert Full Documentation In addition to tf.print, the other common use of control_dependencies is tf.debugging.Assert. This op does what you'd expect: checks a boolean condition and aborts execution with an InvalidArgumentError if the condition is not true. Just like tf.print, it is likely to be pruned from the dependency tree and ignored if run in deferred execution mode without control_dependencies.
g = tf.Graph() with g.as_default(): x = tf.compat.v1.placeholder(tf.float32, shape=[]) with tf.control_dependencies([ tf.debugging.Assert(tf.not_equal(x, 0), ["Invalid value for x:",x])]): y = 2.0 / x with tf.compat.v1.Session(graph=g) as sess: try: results = sess.run(y, feed_dict={x: 0.0}) except tf.errors.InvalidArgumentError as e: print('Value of x is zero\nError message:') print(e.message)
There are also a bunch of helper methods, such as * assert_equal * assert_positive * assert_rank_at_least * etc. to simplify common uses of tf.Assert. So our sample above could have been written as:
g = tf.Graph() with g.as_default(): x = tf.compat.v1.placeholder(tf.float32, shape=[]) with tf.control_dependencies([tf.debugging.assert_none_equal(x, 0.0)]): y = 2.0 / x with tf.compat.v1.Session(graph=g) as sess: try: results = sess.run(y, feed_dict={x: 0.0}) except tf.errors.InvalidArgumentError as e: print('Value of x is zero\nError message:') print(e.message)
Control Flow tf.cond Full Documentation The cond op is the TensorFlow equivalent of if-else. It takes * a condition, which must resolve down to a single scalar boolean * true_fn: a Python callable to generate one or more tensors that will be evaluated if the condition is true * false_fn: same as true_fn, but the resulting tensors will only be evaluated if the condition is false The condition is evaluated, and if its result is true, then the tensors generated by true_fn are evaluated and those generated by false_fn are abandoned (and vice versa if the result is false). Note that true_fn and false_fn must be Python functions (or lambdas), not just tensors:
# This won't work try: tf.cond(tf.constant(True), tf.constant(1), tf.constant(2)) except TypeError as e: pass # You need a callable: tf.cond(tf.constant(True), lambda: tf.constant(1), lambda: tf.constant(2))
The exact order of execution is a little complicated * true_fn and false_fn are executed just once, when the graph is being built. * the tensors created by true_fn (if the condition is true) or false_fn (if the condition is false) will be evaluated strictly after the condition has been evaluated * the tensors created by false_fn (if the condition is true) or true_fn (if the condition is false) will never be evaluated * any tensors depended on by the tensors generated by true_fn or false_fn will always be evaluated, regardless of what the condition evaluates to
def dependency_fn(): print ("DEPENDENCY: I'm always evaluated at execution time because I'm a dependency\n") return tf.constant(2) dependency = tf.py_function(dependency_fn, inp=[], Tout=tf.int32) def true_op_fn(): print ("TRUE_OP_FN: I'm evaluated at execution time because condition is True\n") return 1 def true_fn(): print ("TRUE_FN: I'm evaluated at graph building time") return tf.py_function(true_op_fn, inp=[], Tout=tf.int32) def false_op_fn(input): print ("FALSE_OP_FN: I'm never evaluated because condition isn't False\n") return 1 + input def false_fn(): print ("FALSE_FN: I'm evaluated at graph building time") return tf.py_function(false_op_fn, inp=[dependency], Tout=tf.int32) def predicate_fn(): print("\n****** Executing the graph") print("PREDICATE: I'm evaluated at execution time\n") return tf.constant(True) @tf.function def test_fn(): print("****** Building graph") tf.cond(tf.py_function(predicate_fn, inp=[], Tout=tf.bool), true_fn, false_fn) test_fn()
tf.while_loop Full Documentation This is one of the more complicated ops in TensorFlow, so we'll take things step by step. The most important parameter is loop_vars. This is a tuple/list of tensors. Next up is cond: this is a Python callable that should take the same number of arguments as loop_vars contains, and return a single boolean. The third important parameter is body. This is a Python callable that should take the same number of arguments as loop_vars contains, and return a tuple/list of values with the same size as loop_vars, and whose members are of the same type/arity/shape as those in loop_vars. Note that like the true_fn and false_fn parameters discussed in tf.cond above, body and cond are callables that are called once during graph definition. To a first level approximation, the behaviour is then roughly akin to the following pseudo-code:

```python
working_vars = loop_vars
while cond(*working_vars):
    working_vars = body(*working_vars)
return working_vars
```

There are optional complications: * By default, each loop variable must have exactly the same shape/size/arity in all iterations. If you don't want that (e.g. because you want to increase the size in a particular dimension by 1 each iteration), then you can use shape_invariants to loosen the checks. * maximum_iterations can be used to put an upper bound on the number of times the loop is executed, even if cond still returns true The documentation contains some warnings about parallel executions, race conditions and variables getting out of sync. To understand these, we need to go beyond the first level approximation above.
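Before going beyond it, the first-level approximation itself can be pinned down in plain Python, with cond and body as ordinary callables. This is only a mental model of the loop contract, not TensorFlow code (the name while_loop_model is made up for illustration):

```python
# Plain-Python model of the first-level approximation of tf.while_loop:
# repeatedly apply `body` to the loop variables while `cond` holds.
def while_loop_model(cond, body, loop_vars):
    working_vars = loop_vars
    while cond(*working_vars):
        working_vars = body(*working_vars)
    return working_vars

# Same shape as a tf.while_loop call: index counts up, accumulator sums indices.
index, accumulator = while_loop_model(
    cond=lambda idx, acc: idx < 4,
    body=lambda idx, acc: (idx + 1, acc + idx),
    loop_vars=(1, 0))

print(index, accumulator)  # 4 6
```

The loop variables are threaded through body on every step, exactly as tf.while_loop threads tensors through its body callable.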
Consider the following:
```python
index = tf.constant(1)
accumulator = tf.constant(0)

loop = tf.while_loop(
    loop_vars=[index, accumulator],
    cond=lambda idx, acc: idx < 4,
    body=lambda idx, acc: [idx + 1, acc + idx]
)
```
To a second level approximation, this is equivalent to
```python
index_initial = tf.constant(1)
accumulator_initial = tf.constant(0)

index_iteration_1 = tf.add(index_initial, 1)
index_iteration_2 = tf.add(index_iteration_1, 1)
index_iteration_3 = tf.add(index_iteration_2, 1)

accumulator_iteration_1 = tf.add(accumulator_initial, index_initial)
accumulator_iteration_2 = tf.add(accumulator_iteration_1, index_iteration_1)
accumulator_iteration_3 = tf.add(accumulator_iteration_2, index_iteration_2)

loop = [index_iteration_3, accumulator_iteration_3]
```
(for the full, unapproximated, gory details of the graph, run the code below
g = tf.Graph() with g.as_default(): index = tf.constant(1) accumulator = tf.constant(0) loop = tf.while_loop( loop_vars=[index, accumulator], cond = lambda idx, acc: idx < 4, body = lambda idx, acc: [idx+1, acc + idx] ) with tf.compat.v1.Session() as sess: with FileWriter("logs", sess.graph): results = sess.run(loop) # Graph visualization %tensorboard --logdir logs
but our second-level approximation will do for now). Notice how index_iteration_3 doesn't depend on the accumulator values at all. Thus, assuming our accumulator tensors were doing something more complicated than adding two integers, and assuming we were running on hardware with plenty of execution units, it's possible that index_iteration_3 could be fully calculated in one thread while accumulator_iteration_1 is still being calculated in another. It usually doesn't matter, because loop depends on both index_iteration_3 and accumulator_iteration_3, so it and any dependencies can't start their evaluation before all the accumulation steps have completed. But if you're depending on side effects, or clever operations depending on global state that TensorFlow is unaware of (e.g. in custom kernels, or py_function ops), then it's something to be aware of. You can use the while_loop's parallel_iterations parameter to restrict the number of iterations that can be calculated in parallel if this does become an issue. Autograph to deal with for-loops and if-statements One may be tempted to use Python for-loops and if-statements and expect TensorFlow to correctly map them to control flow. Autograph is a utility that comes with tf.function and allows Python loops and conditionals to be treated as TensorFlow control flow ops. Autograph is enabled by default and is capable of converting Python logic to TF code. See more details on the official TF page here.
# First let us explicitly disable Autograph
@tf.function(autograph=False)
def loop_fn(index, max_iterations):
    for index in range(max_iterations):
        index += 1
        if index == 4:
            tf.print('index is equal to 4')
    return index

# Create some inputs
index = tf.constant(0)
max_iterations = tf.constant(5)

# Try calling the loop
try:
    loop_fn(index, max_iterations)
except TypeError as e:
    print(e)

# Autograph is enabled by default
@tf.function
def loop_fn(index, max_iterations):
    for index in range(max_iterations):
        index += 1
        if index == 4:
            tf.print('index is equal to 4')
    return index

# Note that Autograph successfully converted Python code to a TF graph
print(loop_fn(index, max_iterations))
Exercise: Mandelbrot set Recall that the Mandelbrot set is defined as the set of values of $c$ in the complex plane such that the recursion: $z_{n+1} = z_{n}^2 + c$ does not diverge. It is known that all such values of $c$ lie inside the circle of radius $2$ around the origin. So what we'll do is Create a 2-d tensor c_values ranging from -1-1i to 1+1i Create a matching 2-d tensor Z_values with initial values of 0 Create a third 2-d tensor diverged_after which contains the iteration number at which the matching Z_value's absolute value first exceeded 2 (or MAX_ITERATIONS, if it always stayed below 2) Update the above using a while_loop display the final values of diverged_after as an image to see the famous shape
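Before writing the TensorFlow version, the recursion itself is easy to sanity-check for individual points in plain Python (the helper below is just for intuition about what diverged_after should record; it is not part of the exercise):

```python
def diverged_after(c, max_iterations=64):
    """Return the iteration at which |z| first exceeds 2, or max_iterations."""
    z = 0j
    for n in range(max_iterations):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iterations

print(diverged_after(0j))      # 64: c = 0 never diverges, so it is in the set
print(diverged_after(1 + 1j))  # 1: c = 1+1i escapes almost immediately
```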
MAX_ITERATIONS = 64
NUM_PIXELS = 512

def GenerateGrid(nX, nY, bottom_left=(-1.0, -1.0), top_right=(1.0, 1.0)):
    """Generates a complex matrix of shape [nX, nY].

    Generates an evenly spaced grid of complex numbers spanning the rectangle
    between the supplied diagonal points.

    Args:
      nX: A positive integer. The number of points in the horizontal direction.
      nY: A positive integer. The number of points in the vertical direction.
      bottom_left: The coordinates of the bottom left corner of the rectangle
        to cover.
      top_right: The coordinates of the top right corner of the rectangle to
        cover.

    Returns:
      A constant tensor of type complex128 and shape [nX, nY].
    """
    x = tf.linspace(bottom_left[0], top_right[0], nX)
    y = tf.linspace(bottom_left[1], top_right[1], nY)
    real, imag = tf.meshgrid(x, y)
    return tf.cast(tf.complex(real, imag), tf.complex128)

c_values = GenerateGrid(NUM_PIXELS, NUM_PIXELS)
initial_Z_values = tf.zeros_like(c_values, dtype=tf.complex128)
initial_diverged_after = tf.ones_like(c_values, dtype=tf.int32) * MAX_ITERATIONS

# You need to put the various values you want to change inside the loop here
loop_vars = ()

# this needs to take the same number of arguments as loop_vars contains and
# return a tuple of equal size with the next iteration's values
def body():
    # hint: tf.abs will give the magnitude of a complex value
    return ()

# this just needs to take the same number of arguments as loop_vars contains and
# return true (we'll use maximum_iterations to exit the loop)
def cond():
    return True

results = tf.while_loop(
    loop_vars=loop_vars,
    body=body,
    cond=cond,
    maximum_iterations=MAX_ITERATIONS)

## extract the final value of diverged_after from the tuple
final_diverged_after = results[-1]
plt.matshow(final_diverged_after)
pass

#@title Solution: Mandelbrot set (Double-click to reveal)
MAX_ITERATIONS = 64
NUM_PIXELS = 512

def GenerateGrid(nX, nY, bottom_left=(-1.0, -1.0), top_right=(1.0, 1.0)):
    """Generates a complex matrix of shape [nX, nY].
    Generates an evenly spaced grid of complex numbers spanning the rectangle
    between the supplied diagonal points.

    Args:
      nX: A positive integer. The number of points in the horizontal direction.
      nY: A positive integer. The number of points in the vertical direction.
      bottom_left: The coordinates of the bottom left corner of the rectangle
        to cover.
      top_right: The coordinates of the top right corner of the rectangle to
        cover.

    Returns:
      A constant tensor of type complex128 and shape [nX, nY].
    """
    x = tf.linspace(bottom_left[0], top_right[0], nX)
    y = tf.linspace(bottom_left[1], top_right[1], nY)
    real, imag = tf.meshgrid(x, y)
    return tf.cast(tf.complex(real, imag), tf.complex128)

c_values = GenerateGrid(NUM_PIXELS, NUM_PIXELS)
initial_Z_values = tf.zeros_like(c_values, dtype=tf.complex128)
initial_diverged_after = tf.ones_like(c_values, dtype=tf.int32) * MAX_ITERATIONS

# You need to put the various values you want to change inside the loop here
loop_vars = (0, initial_Z_values, initial_diverged_after)

# this needs to take the same number of arguments as loop_vars contains and
# return a tuple of equal size with the next iteration's values
def body(iteration_count, Z_values, diverged_after):
    new_Z_values = Z_values * Z_values + c_values
    has_diverged = tf.abs(new_Z_values) > 2.0
    new_diverged_after = tf.minimum(diverged_after, tf.where(
        has_diverged, iteration_count, MAX_ITERATIONS))
    return (iteration_count + 1, new_Z_values, new_diverged_after)

# this just needs to take the same number of arguments as loop_vars contains and
# return true (we'll use maximum_iterations to exit the loop)
def cond(iteration_count, Z_values, diverged_after):
    return True

results = tf.while_loop(
    loop_vars=loop_vars,
    body=body,
    cond=cond,
    maximum_iterations=MAX_ITERATIONS)

## extract the final value of diverged_after from the tuple
final_diverged_after = results[-1]
plt.matshow(final_diverged_after)
plt.show()
Real, full-scale grammars for toponyms are collected in the <a href="https://github.com/natasha/natasha">Natasha</a> repository. Finding a substring in the text is not enough: it has to be split into fields and normalized. For example, from the phrase "12 марта по приказу президента Владимира Путина ..." ("On March 12, by order of president Vladimir Putin ...") we extract the object Person(position='президент', Name(first='Владимир', last='Путин')).
from yargy import Parser, rule
from yargy.predicates import gram
from yargy.pipelines import morph_pipeline
from yargy.interpretation import fact
from IPython.display import display

Person = fact(
    'Person',
    ['position', 'name']
)
Name = fact(
    'Name',
    ['first', 'last']
)

POSITION = morph_pipeline([
    'премьер министр',
    'президент'
])
NAME = rule(
    gram('Name').interpretation(
        Name.first.inflected()
    ),
    gram('Surn').interpretation(
        Name.last.inflected()
    )
).interpretation(
    Name
)
PERSON = rule(
    POSITION.interpretation(
        Person.position.inflected()
    ),
    NAME.interpretation(
        Person.name
    )
).interpretation(
    Person
)

parser = Parser(PERSON)
text = '''
12 марта по приказу президента Владимира Путина ...
'''
for match in parser.findall(text):
    display(match.fact)
docs/index.ipynb
bureaucratic-labs/yargy
mit
Grammars for names are collected in the Natasha repository. Tokenizer The parser works with a sequence of tokens. The tokenizer built into Yargy is simple and predictable:
from yargy.tokenizer import MorphTokenizer tokenizer = MorphTokenizer() text = '''Ростов-на-Дону Длительностью 18ч. 10мин. Яндекс.Такси π ≈ 3.1415 1 500 000$ http://vk.com ''' for line in text.splitlines(): print([_.value for _ in tokenizer(line)])
For each token Pymorphy2 returns a set of grammemes. For example, "NOUN, sing, femn" means "a singular feminine noun". The full list is in the <a href="https://pymorphy2.readthedocs.io/en/latest/user/grammemes.html">Pymorphy2 documentation</a>. Out of context, a word has several possible parses. For example, "стали" is a verb (VERB) in the phrase "мы стали лучше" ("we became better") and a noun (NOUN) in "марки стали" ("grades of steel"):
tokenizer = MorphTokenizer() list(tokenizer('марки стали'))
The tokenizer is rule-based. The <a href="ref.ipynb#Токенизатор">reference</a> shows how to change the default rules and add new ones. Predicates A predicate takes a token and returns True or False. Yargy ships with a <a href="ref.ipynb#Предикаты">set of ready-made predicates</a>. The operators and_, or_ and not_ combine predicates:
from yargy import and_, not_ from yargy.tokenizer import MorphTokenizer from yargy.predicates import is_capitalized, eq tokenizer = MorphTokenizer() token = next(tokenizer('Стали')) predicate = is_capitalized() assert predicate(token) == True predicate = and_( is_capitalized(), not_(eq('марки')) ) assert predicate(token) == True
<a href="ref.ipynb#predicates.custom">custom</a> creates a predicate from an arbitrary function. For example, a predicate for Roman numerals:
from pymorphy2.shapes import is_roman_number
from yargy.parser import Context
from yargy.tokenizer import Tokenizer
from yargy.predicates import custom

tokenizer = Tokenizer()
token = next(tokenizer('XL'))

predicate = custom(is_roman_number, types='LATIN')
# activation checks that the tokenizer supports the 'LATIN' token type
predicate = predicate.activate(Context(tokenizer))

assert predicate(token) == True

token = next(tokenizer('XS'))
assert predicate(token) == False
Gazetteer A gazetteer works with a sequence of words. For example, instead of:
from yargy import or_, rule from yargy.predicates import normalized RULE = or_( rule(normalized('dvd'), '-', normalized('диск')), rule(normalized('видео'), normalized('файл')) )
it is convenient to use morph_pipeline:
from yargy import Parser from yargy.pipelines import morph_pipeline RULE = morph_pipeline([ 'dvd-диск', 'видео файл', 'видеофильм', 'газета', 'электронный дневник', 'эссе', ]) parser = Parser(RULE) text = 'Видео файл на dvd-диске' for match in parser.findall(text): print([_.value for _ in match.tokens])
The list of gazetteers is in the <a href="ref.ipynb#Газеттир">reference</a>. Grammars In Yargy a context-free grammar is described with Python constructs. For example, the traditional notation of a grammar for clothing sizes: KEY -> р. | размер VALUE -> S | M | L SIZE -> KEY VALUE This is how it looks in Yargy:
from yargy import rule, or_ KEY = or_( rule('р', '.'), rule('размер') ).named('KEY') VALUE = or_( rule('S'), rule('M'), rule('L'), ).named('VALUE') SIZE = rule( KEY, VALUE ).named('SIZE') SIZE.normalized.as_bnf
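To build intuition about which token sequences this grammar accepts, here is a tiny hand-written recogniser for the same BNF in plain Python. It is independent of Yargy; the function name and the token-list representation are made up for illustration:

```python
def match_size(tokens):
    """Recognise SIZE -> KEY VALUE over a list of tokens, per the BNF above."""
    key_alternatives = [['р', '.'], ['размер']]  # KEY -> р . | размер
    values = {'S', 'M', 'L'}                     # VALUE -> S | M | L
    for key in key_alternatives:
        rest = tokens[len(key):]
        if tokens[:len(key)] == key and len(rest) == 1 and rest[0] in values:
            return True
    return False

print(match_size(['размер', 'M']))    # True
print(match_size(['р', '.', 'L']))    # True
print(match_size(['размер', 'XXL']))  # False
```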
In Yargy a grammar terminal is a predicate. Let's use the built-in predicate in_ to shorten the definition of VALUE:
from yargy.predicates import in_ VALUE = rule( in_('SML') ).named('VALUE') SIZE = rule( KEY, VALUE ).named('SIZE') SIZE.normalized.as_bnf
What do you do when the right-hand side of a rule refers to the left-hand side? For example: EXPR -> a | ( EXPR + EXPR ) In Python you cannot use undeclared variables. For recursive rules there is the forward construct:
from yargy import forward EXPR = forward() EXPR.define(or_( rule('a'), rule('(', EXPR, '+', EXPR, ')') ).named('EXPR')) EXPR.normalized.as_bnf
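The recursion in EXPR is exactly the recursion you would write in a hand-rolled recursive-descent recogniser. A plain-Python sketch of the same grammar, independent of Yargy, makes that concrete (all names here are made up for illustration):

```python
def match_expr(tokens):
    """Recognise EXPR -> a | ( EXPR + EXPR ); return remaining tokens or None."""
    if not tokens:
        return None
    if tokens[0] == 'a':
        return tokens[1:]
    if tokens[0] == '(':
        rest = match_expr(tokens[1:])          # recurse for the left EXPR
        if rest and rest[0] == '+':
            rest = match_expr(rest[1:])        # recurse for the right EXPR
            if rest and rest[0] == ')':
                return rest[1:]
    return None

def accepts(tokens):
    return match_expr(tokens) == []

print(accepts(['a']))                                          # True
print(accepts(['(', 'a', '+', '(', 'a', '+', 'a', ')', ')']))  # True
print(accepts(['(', 'a', '+', ')']))                           # False
```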
Recursive rules describe token sequences of arbitrary length. A grammar for text in quotation marks:
from yargy import not_ from yargy.predicates import eq WORD = not_(eq('»')) TEXT = forward() TEXT.define(or_( rule(WORD), rule(WORD, TEXT) )) TITLE = rule( '«', TEXT, '»' ).named('TITLE') TITLE.normalized.as_bnf
For convenience, Yargy has the repeatable method, which makes the definition shorter. The library will add forward automatically:
TITLE = rule( '«', not_(eq('»')).repeatable(), '»' ).named('TITLE') TITLE.normalized.as_bnf
Parser The parser has two methods: findall and match. findall finds all non-overlapping substrings that satisfy the grammar:
parser = Parser( or_( PERSON, TITLE ) ) text = 'Президент Владимир Путин в фильме «Интервью с Путиным» ..' for match in parser.findall(text): print([_.value for _ in match.tokens])
match tries to parse the whole text:
match = parser.match('Президент Владимир Путин') print([_.value for _ in match.tokens]) match = parser.match('Президент Владимир Путин 25 мая') print(match)
Interpretation The parser's output is a parse tree. A grammar and parse trees for dates:
from IPython.display import display
from yargy.predicates import (
    lte, gte, dictionary
)

MONTHS = {
    'январь', 'февраль', 'март', 'апрель', 'май', 'июнь',
    'июль', 'август', 'сентябрь', 'октябрь', 'ноябрь', 'декабрь'
}
MONTH_NAME = dictionary(MONTHS)
MONTH = and_(
    gte(1), lte(12)
)
DAY = and_(
    gte(1), lte(31)
)
YEAR = and_(
    gte(1900), lte(2100)
)
DATE = or_(
    rule(DAY, MONTH_NAME, YEAR),
    rule(YEAR, '-', MONTH, '-', DAY),
    rule(YEAR, 'г', '.')
).named('DATE')

parser = Parser(DATE)
text = '''2015г.
18 июля 2016
2016-01-02
'''
for line in text.splitlines():
    match = parser.match(line)
    display(match.tree.as_dot)
Interpretation is the process of transforming a parse tree into an object with a set of fields. For a date, for example, we want structures like Date(year=2016, month=1, day=2). The user marks the tree's attribute nodes and constructor nodes with the interpretation method:
from yargy.interpretation import fact Date = fact( 'Date', ['year', 'month', 'day'] ) DATE = or_( rule( DAY.interpretation( Date.day ), MONTH_NAME.interpretation( Date.month ), YEAR.interpretation( Date.year ) ), rule( YEAR.interpretation( Date.year ), '-', MONTH.interpretation( Date.month ), '-', DAY.interpretation( Date.day ) ), rule( YEAR.interpretation( Date.year ), 'г', '.' ) ).interpretation( Date ).named('DATE') parser = Parser(DATE) for line in text.splitlines(): match = parser.match(line) display(match.tree.as_dot)
From the annotated tree the library assembles the object:
for line in text.splitlines(): match = parser.match(line) display(match.fact)
More about interpretation in the <a href="ref.ipynb#Интерпретация">reference</a>. Normalization The contents of fact fields need to be normalized. For example, not Date('июня', '2018') but Date(6, 2018); not Person('президента', Name('Владимира', 'Путина')) but Person('президент', Name('Владимир', 'Путин')). In Yargy the user specifies, while annotating the parse tree, how to normalize the attribute nodes. In the example, the word "июня" is reduced to its normal form "июнь" and replaced with the number 6 via the MONTHS dictionary. The year and day are simply converted to int:
MONTHS = {
    'январь': 1,
    'февраль': 2,
    'март': 3,
    'апрель': 4,
    'май': 5,
    'июнь': 6,
    'июль': 7,
    'август': 8,
    'сентябрь': 9,
    'октябрь': 10,
    'ноябрь': 11,
    'декабрь': 12
}

DATE = rule(
    DAY.interpretation(
        Date.day.custom(int)
    ),
    MONTH_NAME.interpretation(
        Date.month.normalized().custom(MONTHS.get)
    ),
    YEAR.interpretation(
        Date.year.custom(int)
    )
).interpretation(
    Date
)

parser = Parser(DATE)
match = parser.match('18 июня 2016')
match.fact
More details in the <a href="ref.ipynb#Нормализация">reference</a>. Agreement A primitive grammar for names:
NAME = rule( gram('Name').interpretation( Name.first.inflected() ), gram('Surn').interpretation( Name.last.inflected() ) ).interpretation( Name )
It has two problems. It fires on word combinations where the first name and the surname are in different cases:
parser = Parser(NAME) for match in parser.findall('Илье Ивановым, Павлом Семенов'): print([_.value for _ in match.tokens])
The first name and the surname are brought to normal form independently, so we end up with a woman named "Иванов":
parser = Parser(NAME) for match in parser.findall('Сашу Иванову, Саше Иванову'): display(match.fact)
In Yargy, agreement between words and phrases is established with the match method. For number agreement, pass number_relation into match; for agreement in case, gender and number, use gnc_relation:
from yargy.relations import gnc_relation gnc = gnc_relation() NAME = rule( gram('Name').interpretation( Name.first.inflected() ).match(gnc), gram('Surn').interpretation( Name.last.inflected() ).match(gnc) ).interpretation( Name ) parser = Parser(NAME) for match in parser.findall('Илье Ивановым, Павлом Семенов, Саша Быков'): print([_.value for _ in match.tokens]) parser = Parser(NAME) for match in parser.findall('Сашу Иванову, Саше Иванову'): display(match.fact)
Review Before we start playing with the actual implementations, let us review a couple of things about MDPs. A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state, not on the sequence of events that preceded it. -- Source: Wikipedia Often it is possible to model many different phenomena as a Markov process by being flexible with our definition of state. MDPs help us deal with fully observable, non-deterministic/stochastic environments. For partially observable, stochastic cases we make use of a generalization of MDPs called POMDPs (partially observable Markov decision processes). Our overall goal in solving an MDP is to come up with a policy which guides us to select the best action in each state so as to maximize the expected sum of future rewards. MDP To begin with, let us look at the implementation of the MDP class defined in mdp.py. The docstring tells us what is required to define an MDP, namely: a set of states, actions, an initial state, a transition model, and a reward function. Each of these is implemented as a method. Do not close the popup, so that you can follow along with the description of the code below.
%psource MDP
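As a toy illustration of the ingredients listed in the docstring — states, actions, a transition model, and a reward function — a hypothetical two-state MDP can be written down with plain dictionaries. This is not the module's representation, just the bare data:

```python
# A hypothetical two-state MDP: in each state you can 'stay' or 'move'.
# T maps (state, action) to a list of (probability, next_state) pairs.
states = {'s0', 's1'}
actions = {'stay', 'move'}
T = {
    ('s0', 'stay'): [(1.0, 's0')],
    ('s0', 'move'): [(0.8, 's1'), (0.2, 's0')],
    ('s1', 'stay'): [(1.0, 's1')],
    ('s1', 'move'): [(0.8, 's0'), (0.2, 's1')],
}
R = {'s0': 0.0, 's1': 1.0}  # reward for being in each state

# Sanity check: every transition distribution sums to 1.
for key, outcomes in T.items():
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
print('transition model is a valid distribution for every (state, action)')
```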
mdp.ipynb
AmberJBlue/aima-python
mit
With this we have successfully represented our MDP. Later we will look at ways to solve this MDP. Grid MDP Now we look at a concrete implementation that uses MDP as its base class. The GridMDP class in the mdp module is used to represent a grid-world MDP like the one shown in Fig 17.1 of the AIMA book. The code should be easy to understand if you have gone through the CustomMDP example.
%psource GridMDP
Value Iteration Now that we have looked at how to represent MDPs, let's aim at solving them. Our ultimate goal is to obtain an optimal policy. We start with looking at Value Iteration and a visualisation that should help us understand it better. We start by calculating the Value/Utility of each state. The Value of a state is the expected sum of discounted future rewards given that we start in that state and follow a particular policy pi. The algorithm Value Iteration (Fig. 17.4 in the book) relies on finding solutions of the Bellman Equation. The intuition behind why Value Iteration works is that values propagate. This point will become clearer after we encounter the visualisation. For more information you can refer to Section 17.2 of the book.
%psource value_iteration
The function takes two parameters: an MDP to solve, and epsilon, the maximum error allowed in the utility of any state. It returns a dictionary of utilities, where the keys are the states and the values are their utilities. Let us solve the sequential_decision_environment GridMDP.
value_iteration(sequential_decision_environment)
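To see the algorithm stripped of the library scaffolding, here is a self-contained sketch of value iteration on a tiny hand-made two-state MDP. All names below are illustrative, not part of the aima-python module:

```python
def value_iteration_sketch(states, R, T, actions, gamma=0.9, epsilon=1e-6):
    """Iterate the Bellman update until utilities stop changing (Fig. 17.4 structure)."""
    U = {s: 0.0 for s in states}
    while True:
        U1, delta = {}, 0.0
        for s in states:
            acts = actions(s)
            if acts:  # Bellman update over the available actions
                U1[s] = R[s] + gamma * max(sum(p * U[s2] for p, s2 in T(s, a)) for a in acts)
            else:     # terminal state: utility is just its reward
                U1[s] = R[s]
            delta = max(delta, abs(U1[s] - U[s]))
        U = U1
        if delta < epsilon * (1 - gamma) / gamma:
            return U

# Two states: 0 (start) and 1 (terminal, reward 1).
# Action 'go' from state 0 reaches the terminal with probability 0.8, else stays put.
R = {0: 0.0, 1: 1.0}
T = lambda s, a: [(0.8, 1), (0.2, 0)]
actions = lambda s: ['go'] if s == 0 else []

U = value_iteration_sketch([0, 1], R, T, actions)
print(U)  # U[0] converges to 0.72 / 0.82 ≈ 0.878
```

The fixed point can be checked by hand: U(0) = 0.9 * (0.8 * 1 + 0.2 * U(0)), which solves to 0.72 / 0.82.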
Visualization for Value Iteration

To illustrate how values propagate out of states, let us create a simple visualisation. We will use a modified version of the value_iteration function which stores U over time. We will also remove the parameter epsilon and instead specify the number of iterations we want.
def value_iteration_instru(mdp, iterations=20):
    U_over_time = []
    U1 = {s: 0 for s in mdp.states}
    R, T, gamma = mdp.R, mdp.T, mdp.gamma
    for _ in range(iterations):
        U = U1.copy()
        for s in mdp.states:
            U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])
                                        for a in mdp.actions(s)])
        U_over_time.append(U)
    return U_over_time
Next, we define a function to create the visualisation from the utilities returned by value_iteration_instru. The reader need not concern themselves with the code that immediately follows, as it is just usage of Matplotlib with IPython widgets. If you are interested in reading more about these, visit ipywidgets.readthedocs.io
%matplotlib inline
import matplotlib.pyplot as plt
from collections import defaultdict
import time

def make_plot_grid_step_function(columns, rows, U_over_time):
    '''ipywidgets interactive function supports single parameter as input.
    This function creates and returns such a function by taking in input
    other parameters.
    '''
    def plot_grid_step(iteration):
        data = U_over_time[iteration]
        data = defaultdict(lambda: 0, data)
        grid = []
        for row in range(rows):
            current_row = []
            for column in range(columns):
                current_row.append(data[(column, row)])
            grid.append(current_row)
        grid.reverse()  # output like book
        fig = plt.imshow(grid, cmap=plt.cm.bwr, interpolation='nearest')
        plt.axis('off')
        fig.axes.get_xaxis().set_visible(False)
        fig.axes.get_yaxis().set_visible(False)
        for col in range(len(grid)):
            for row in range(len(grid[0])):
                magic = grid[col][row]
                fig.axes.text(row, col, "{0:.2f}".format(magic), va='center', ha='center')
        plt.show()
    return plot_grid_step

def make_visualize(slider):
    '''Takes as input a slider and returns a callback function
    for timer and animation.
    '''
    def visualize_callback(Visualize, time_step):
        if Visualize is True:
            for i in range(slider.min, slider.max + 1):
                slider.value = i
                time.sleep(float(time_step))
    return visualize_callback

columns = 4
rows = 3
U_over_time = value_iteration_instru(sequential_decision_environment)
plot_grid_step = make_plot_grid_step_function(columns, rows, U_over_time)

import ipywidgets as widgets
from IPython.display import display

iteration_slider = widgets.IntSlider(min=1, max=15, step=1, value=0)
w = widgets.interactive(plot_grid_step, iteration=iteration_slider)
display(w)

visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description="Visualize", value=False)
time_select = widgets.ToggleButtons(description='Extra Delay:',
                                    options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize=visualize_button, time_step=time_select)
display(a)
Tabular model

A basic model that can be used on tabular data.

Embeddings
#|export
def emb_sz_rule(
    n_cat:int  # Cardinality of a category
) -> int:
    "Rule of thumb to pick embedding size corresponding to `n_cat`"
    return min(600, round(1.6 * n_cat**0.56))

#|export
def _one_emb_sz(classes, n, sz_dict=None):
    "Pick an embedding size for `n` depending on `classes` if not given in `sz_dict`."
    sz_dict = ifnone(sz_dict, {})
    n_cat = len(classes[n])
    sz = sz_dict.get(n, int(emb_sz_rule(n_cat)))  # rule of thumb
    return n_cat, sz
nbs/42_tabular.model.ipynb
fastai/fastai
apache-2.0
Through trial and error, this general rule takes the lower of two values:

* a dimension space of 600
* a dimension space equal to 1.6 times the cardinality of the variable raised to the 0.56 power

This provides a good starting point for an embedding space for your variables. More advanced users who wish to lean into this practice can tweak these values at their discretion; it is not uncommon for slight adjustments to this general formula to provide more success.
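To make the rule concrete, here is a small self-contained sketch (the function below re-implements the emb_sz_rule defined above, purely for illustration) showing the sizes it picks for a few cardinalities:

```python
# Re-implementation of the rule of thumb above, for illustration only.
def emb_sz_rule(n_cat: int) -> int:
    return min(600, round(1.6 * n_cat ** 0.56))

# Small categories get small embeddings; very high-cardinality ones are capped at 600.
for n_cat in (2, 10, 100, 1000, 1_000_000):
    print(n_cat, emb_sz_rule(n_cat))
```

For example, a binary variable gets an embedding of size 2, a cardinality of 1,000 gets 77, and anything large enough hits the 600 cap.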
#|export
def get_emb_sz(
    to:(Tabular, TabularPandas),
    sz_dict:dict=None  # Dictionary of {'class_name' : size, ...} to override default `emb_sz_rule`
) -> list:  # List of embedding sizes for each category
    "Get embedding size for each cat_name in `Tabular` or `TabularPandas`, or populate embedding size manually using sz_dict"
    return [_one_emb_sz(to.classes, n, sz_dict) for n in to.cat_names]

#|export
class TabularModel(Module):
    "Basic model for tabular data."
    def __init__(self,
        emb_szs:list,  # Sequence of (num_embeddings, embedding_dim) for each categorical variable
        n_cont:int,  # Number of continuous variables
        out_sz:int,  # Number of outputs for final `LinBnDrop` layer
        layers:list,  # Sequence of ints used to specify the input and output size of each `LinBnDrop` layer
        ps:(float, list)=None,  # Sequence of dropout probabilities for `LinBnDrop`
        embed_p:float=0.,  # Dropout probability for `Embedding` layer
        y_range=None,  # Low and high for `SigmoidRange` activation
        use_bn:bool=True,  # Use `BatchNorm1d` in `LinBnDrop` layers
        bn_final:bool=False,  # Use `BatchNorm1d` on final layer
        bn_cont:bool=True,  # Use `BatchNorm1d` on continuous variables
        act_cls=nn.ReLU(inplace=True),  # Activation type for `LinBnDrop` layers
        lin_first:bool=True  # Linear layer is first or last in `LinBnDrop` layers
    ):
        ps = ifnone(ps, [0]*len(layers))
        if not is_listy(ps): ps = [ps]*len(layers)
        self.embeds = nn.ModuleList([Embedding(ni, nf) for ni,nf in emb_szs])
        self.emb_drop = nn.Dropout(embed_p)
        self.bn_cont = nn.BatchNorm1d(n_cont) if bn_cont else None
        n_emb = sum(e.embedding_dim for e in self.embeds)
        self.n_emb,self.n_cont = n_emb,n_cont
        sizes = [n_emb + n_cont] + layers + [out_sz]
        actns = [act_cls for _ in range(len(sizes)-2)] + [None]
        _layers = [LinBnDrop(sizes[i], sizes[i+1], bn=use_bn and (i!=len(actns)-1 or bn_final),
                             p=p, act=a, lin_first=lin_first)
                   for i,(p,a) in enumerate(zip(ps+[0.],actns))]
        if y_range is not None: _layers.append(SigmoidRange(*y_range))
        self.layers = nn.Sequential(*_layers)

    def forward(self, x_cat, x_cont=None):
        if self.n_emb != 0:
            x = [e(x_cat[:,i]) for i,e in enumerate(self.embeds)]
            x = torch.cat(x, 1)
            x = self.emb_drop(x)
        if self.n_cont != 0:
            if self.bn_cont is not None: x_cont = self.bn_cont(x_cont)
            x = torch.cat([x, x_cont], 1) if self.n_emb != 0 else x_cont
        return self.layers(x)
This model expects your cat and cont variables separated. cat is passed through an Embedding layer and potential Dropout, while cont is passed through potential BatchNorm1d. Afterwards both are concatenated and passed through a series of LinBnDrop layers, before a final Linear layer corresponding to the expected outputs.
emb_szs = [(4,2), (17,8)]
m = TabularModel(emb_szs, n_cont=2, out_sz=2, layers=[200,100]).eval()
x_cat = torch.tensor([[2,12]]).long()
x_cont = torch.tensor([[0.7633, -0.1887]]).float()
out = m(x_cat, x_cont)

#|export
@delegates(TabularModel.__init__)
def tabular_config(**kwargs):
    "Convenience function to easily create a config for `TabularModel`"
    return kwargs
Any direct setup of TabularModel's internals should be passed through here:
config = tabular_config(embed_p=0.6, use_bn=False); config
RandomForest Parameter Tuning
=================================================================================================================

* Find optimal parameters through grid search and compare them with those obtained by the greedy method.
* Among numerous scoring types such as recall, accuracy, precision, f1 score, etc., this project will focus on recall-related scores.
* This is because, I believe, a high true positive rate (tpr) can be the key to solving the problem I defined in this project.
# baseline confirmation: the model has to perform at least as well as this
from sklearn.dummy import DummyClassifier

clf_Dummy = DummyClassifier(strategy='most_frequent')
clf_Dummy = clf_Dummy.fit(X_train, y_train)
print('baseline score =>', round(clf_Dummy.score(X_test, y_test), 2))
3_model selection_evaluation.ipynb
higee/amazon-helpful-review
mit
===================================================================================================================

1. Find optimal parameters

1) Greedy Method (sequential parameter tuning)

* The good point of sequential parameter tuning is that I can see how a certain parameter affects performance.
* However, I found this not very useful for finding the optimal combination of parameters.
* So using this method after grid search would be fast and nice.

(1) n_estimators
from sklearn.metrics import recall_score
from sklearn.ensemble import RandomForestClassifier
from matplotlib.pyplot import axvline, axhline

recall_range = []
n_estimator_range = []

for i in np.arange(10, 20, 1):
    clf_RF = RandomForestClassifier(oob_score=True, n_estimators=i).fit(X_train, y_train)
    clf_RF_predicted = clf_RF.predict(X_test)
    recall = round(recall_score(clf_RF_predicted, y_test), 2)
    n_estimator_range.append(i)
    recall_range.append(recall)

dictionary = dict(zip(n_estimator_range, recall_range))

plt.figure(figsize=(10, 3))
plt.plot(n_estimator_range, recall_range, color='#EA5959',
         label='max recall: %(n)0.2f \n%(s)s: %(v)2d'
               % {'n': max(dictionary.values()), 's': 'n estimator',
                  'v': max(dictionary, key=lambda i: dictionary[i])})
plt.scatter([max(dictionary, key=lambda i: dictionary[i])], [max(dictionary.values())], 80, color='#EA5959')
axhline(max(dictionary.values()), color='#EA5959', linewidth=1, linestyle='--')
axvline(max(dictionary, key=lambda i: dictionary[i]), color='#EA5959', linewidth=1, linestyle='--')
plt.legend(loc='lower right', prop={'size': 12})
plt.xlim(min(n_estimator_range), max(n_estimator_range))
plt.ylim(min(recall_range)*0.98, max(recall_range)*1.02)
plt.ylabel('Recall')
plt.xlabel('n estimator');
(2) max features
recall_range = []
max_features_range = []

for i in np.arange(1, 15, 1):
    clf_RF = RandomForestClassifier(oob_score=True, n_estimators=18, max_features=i).fit(X_train, y_train)
    clf_RF_predicted = clf_RF.predict(X_test)
    recall = round(recall_score(clf_RF_predicted, y_test), 2)
    max_features_range.append(i)
    recall_range.append(recall)

dictionary = dict(zip(max_features_range, recall_range))

plt.figure(figsize=(10, 3))
plt.plot(max_features_range, recall_range, color='#EA5959',
         label='max recall: %(n)0.2f \n%(s)s: %(v)2d'
               % {'n': max(dictionary.values()), 's': 'max features',
                  'v': max(dictionary, key=lambda i: dictionary[i])})
plt.scatter([max(dictionary, key=lambda i: dictionary[i])], [max(dictionary.values())], 80, color='#EA5959')
axhline(max(dictionary.values()), color='#EA5959', linewidth=1, linestyle='--')
axvline(max(dictionary, key=lambda i: dictionary[i]), color='#EA5959', linewidth=1, linestyle='--')
plt.legend(loc='lower right', prop={'size': 12})
plt.xlim(min(max_features_range), max(max_features_range))
plt.ylim(min(recall_range)*0.98, max(recall_range)*1.02)
plt.ylabel('Recall')
plt.xlabel('max features');
(3) min sample leaf
recall_range = []
min_samples_leaf_range = []

for i in np.arange(1, 20, 1):
    clf_RF = RandomForestClassifier(oob_score=True, n_estimators=18, max_features=14,
                                    min_samples_leaf=i).fit(X_train, y_train)
    clf_RF_predicted = clf_RF.predict(X_test)
    recall = round(recall_score(clf_RF_predicted, y_test), 2)
    min_samples_leaf_range.append(i)
    recall_range.append(recall)

dictionary = dict(zip(min_samples_leaf_range, recall_range))

plt.figure(figsize=(10, 3))
plt.plot(min_samples_leaf_range, recall_range, color='#EA5959',
         label='max recall: %(n)0.2f \n%(s)s: %(v)2d'
               % {'n': max(dictionary.values()), 's': 'min samples leaf',
                  'v': max(dictionary, key=lambda i: dictionary[i])})
plt.scatter([max(dictionary, key=lambda i: dictionary[i])], [max(dictionary.values())], 80, color='#EA5959')
axhline(max(dictionary.values()), color='#EA5959', linewidth=1, linestyle='--')
axvline(max(dictionary, key=lambda i: dictionary[i]), color='#EA5959', linewidth=1, linestyle='--')
plt.legend(loc='lower right', prop={'size': 12})
plt.xlim(min(min_samples_leaf_range), max(min_samples_leaf_range))
plt.ylim(min(recall_range)*0.98, max(recall_range)*1.02)
plt.ylabel('Recall')
plt.xlabel('min_samples_leaf_range');
2) Grid Search

* A very efficient way to find optimal parameter values.
* Assigning an 'appropriate' value range seems essential in grid search. For instance, I set the value range for 'max_features' to 10 to 28, since I wanted the model to be robust, at the expense of some performance.
from sklearn.pipeline import Pipeline

pipeline_clf_train = Pipeline(steps=[
    ('clf_RF', RandomForestClassifier()),
])

# note: in scikit-learn >= 0.20 GridSearchCV lives in sklearn.model_selection
from sklearn.grid_search import GridSearchCV

parameters = {
    'clf_RF__min_samples_leaf': np.arange(1, 28, 1),
    'clf_RF__max_features': np.arange(10, 28, 1),
    'clf_RF__criterion': ['gini', 'entropy'],
    'clf_RF__n_estimators': [10],
    # 'clf_RF__oob_score': ['True'],
}

gs_clf = GridSearchCV(pipeline_clf_train, parameters, n_jobs=-1, scoring='recall')
gs_clf = gs_clf.fit(X_train, y_train)

best_parameters, score, _ = max(gs_clf.grid_scores_, key=lambda x: x[1])
for param_name in sorted(parameters.keys()):
    print("%s: %r" % (param_name, best_parameters[param_name]))
print('------------------------------')
print('recall score :', score.round(2))
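As a toy illustration of why the greedy (one-parameter-at-a-time) search can miss the best combination while an exhaustive grid cannot, consider a hypothetical score table with an interaction between two parameters (all numbers below are made up for illustration):

```python
# Hypothetical scores for two parameters a, b in {0, 1, 2}; score[a][b].
score = [[5, 1, 1],
         [4, 2, 9],
         [4, 2, 2]]

# Greedy: tune a with b fixed at 0, then tune b with that a fixed.
best_a = max(range(3), key=lambda a: score[a][0])       # picks a = 0 (score 5)
best_b = max(range(3), key=lambda b: score[best_a][b])  # picks b = 0 (score 5)
greedy_best = score[best_a][best_b]

# Grid search: try every combination.
grid_best = max(score[a][b] for a in range(3) for b in range(3))

print(greedy_best, grid_best)  # 5 9 -- greedy misses the interaction at (1, 2)
```

The greedy path gets stuck at 5 because the best setting of one parameter depends on the other, which is exactly the situation noted above where sequential tuning was not very useful.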
2. Take a look at the confusion matrix (what is a confusion matrix?)

1) Greedy method

* The focal point of the confusion matrix in this project is 'helpful' + 'recall'.
* With parameters obtained from the greedy method, I got 0.68/1.0, meaning that out of 129 'helpful' reviews I predicted 88 correctly.
clf_RF = RandomForestClassifier(n_estimators=18, max_features=14, min_samples_leaf=9,
                                oob_score=True).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)

from sklearn.metrics import classification_report, confusion_matrix

target_names = ['not helpful', 'helpful']
print(classification_report(y_test, clf_RF_predicted, target_names=target_names))

plt.figure(figsize=(4, 4))
cm = confusion_matrix(y_test, clf_RF_predicted)
print(cm)
plt.grid(False)
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Confusion matrix')
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label');
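The recall figure quoted for the greedy model follows directly from the confusion-matrix counts; a quick sanity check using the numbers from the text:

```python
# Counts for the 'helpful' class from the text: 129 actual, 88 predicted correctly.
true_positives = 88
false_negatives = 129 - 88  # helpful reviews the model missed

recall = true_positives / (true_positives + false_negatives)
print(round(recall, 2))  # 0.68
```

Recall is TP / (TP + FN), i.e. the fraction of truly helpful reviews the model catches, which is exactly the true positive rate this project cares about.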
2) Grid Search

Compared to the above, performance is better (0.75/1.0).
clf_RF = RandomForestClassifier(n_estimators=10, max_features=19, min_samples_leaf=27,
                                criterion='entropy', oob_score=True).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)

from sklearn.metrics import classification_report, confusion_matrix

target_names = ['not helpful', 'helpful']
print(classification_report(y_test, clf_RF_predicted, target_names=target_names))

plt.figure(figsize=(4, 4))
cm = confusion_matrix(y_test, clf_RF_predicted)
print(cm)
plt.grid(False)
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Confusion matrix')
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label');