We typically combine the first two lines into one expression like this:
age = int(input("Enter your age: "))
type(age)
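Since input() needs a typed response, here is a non-interactive sketch of the same conversion, with a literal string standing in for the prompt (an assumption made only so the example can run on its own):

```python
# Simulating the input step with a literal string instead of input()
age = int("45")       # int() converts the str "45" to the integer 45
print(type(age))      # <class 'int'>
print(age + 1)        # arithmetic now works: 46
```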
content/lessons/02-Variables/LAB-Variables.ipynb
IST256/learn-python
mit
1.1 You Code: Debugging
The following program has errors in it. Your task is to fix the errors so that: your age can be input and converted to an integer, and the program outputs your age and your age next year. For example:
Enter your age: 45
Today you are 45 next year you will be 46
# TODO: Debug this code
age = input("Enter your age: ")
nextage = age + 1
print("Today you are age next year you will be {nextage}")
Format Codes
Python has some string format codes which allow us to control the output of our variables:
%s = format variable as str
%d = format variable as int
%f = format variable as float
You can also include the number of spaces to use; for example, %5.2f prints a float with 5 spaces, 2 to the right of the decimal point.
name = "Mike"
age = 45
gpa = 3.4
print("%s is %d years old. His gpa is %.3f" % (name, age, gpa))
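The width specifier described above (%5.2f) can be checked directly; a small illustrative sketch:

```python
# %5.2f: total field width 5, with 2 digits after the decimal point
s = "%5.2f" % 3.14159
print(repr(s))  # ' 3.14' -- one leading space pads the 4 characters out to width 5
```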
Formatting with F-Strings
The other method of formatting data in Python is F-strings. As we saw in the last lab, F-strings use interpolation to specify the variables we would like to print in-line with the print string. You can format an f-string:
{var:d} formats var as integer
{var:f} formats var as float
{var:.3f} fo...
name = "Mike"
wage = 15
print(f"{name} makes ${wage:.2f} per hour")
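The {var:d} and {var:.3f} specs listed above behave like their % counterparts; a quick sketch:

```python
age = 45
gpa = 3.4
print(f"{age:d}")     # '45'      -- integer formatting
print(f"{gpa:.3f}")   # '3.400'   -- float with 3 decimal places
print(f"{gpa:7.3f}")  # '  3.400' -- width 7, 3 decimal places
```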
1.2 You Code
Re-write the program from (1.1 You Code) so that the print statement uses format codes. Remember: do not copy code; as practice, re-write it.
# TODO: Write code here
1.3 You Code
Use F-strings or format codes to print the PI variable out 3 times: once as a string, once as an int, and once as a float to 4 decimal places.
#TODO: Write Code Here
Putting it all together: Fred's Fence Estimator Fred's Fence has hired you to write a program to estimate the cost of their fencing projects. For a given length and width you will calculate the number of 6 foot fence sections, and the total cost of the project. Each fence section costs $23.95. Assume the posts and lab...
# TODO: Write your code here
Definition of the Pohlhausen profile
The Pohlhausen profile is used to generalize the flat plate profile to the case of curved boundaries or flows with external pressure gradients. The profile is defined as
$$ \frac{u}{U} = F(\eta)+\lambda G(\eta), \quad \eta=\frac{y}{\delta} $$
where
$$ F = 2\eta-2\eta^3+\eta^4 $$
$$ G = ...
def pohlF(eta):
    return 2*eta - 2*eta**3 + eta**4

def pohlG(eta):
    return eta/6*(1 - eta)**3

def pohl(eta, lam):
    return pohlF(eta) + lam*pohlG(eta)
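A quick sanity check of the profile functions: F should run from 0 at the wall to 1 at the boundary layer edge, while G must vanish at both ends so that λ only reshapes the interior of the profile. A minimal sketch:

```python
def pohlF(eta):
    return 2*eta - 2*eta**3 + eta**4

def pohlG(eta):
    return eta/6*(1 - eta)**3

# Boundary conditions of the Pohlhausen profile
print(pohlF(0.0), pohlF(1.0))  # 0.0 1.0 -> no-slip at the wall, u = U at the edge
print(pohlG(0.0), pohlG(1.0))  # 0.0 0.0 -> lambda does not change the edge values
```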
lessons/Pohlhausen.ipynb
ultiyuan/test0
gpl-2.0
Let's take a look at how this changes with $\lambda$.
# Imports added here for completeness; `size` is defined elsewhere in the original notebook
from matplotlib import pyplot
import numpy

size = 8  # figure size in inches (assumed value)

def pohlPlot(lam):
    pyplot.figure(figsize=(size, size))
    pyplot.xlabel('u/U', fontsize=16)
    pyplot.axis([-0.1, 1.1, 0, 1])
    pyplot.ylabel('y/del', fontsize=16)
    eta = numpy.linspace(0.0, 1.0, 100)
    pyplot.plot(pohlF(eta), eta, lw=1, color='black')
    pyplot.plot(pohl(eta, lam), eta, lw=2, color='green')
Change lam below to see how the profile changes compared to the flat plate value.
pohlPlot(lam=7)
Loading a single rate The Rate class holds a single reaction rate and takes a reaclib file as input. There are a lot of methods in the Rate class that allow you to explore the rate.
c13pg = pyrl.Rate("c13-pg-n14-nacr")
examples/pynucastro-examples.ipynb
pyreaclib/pyreaclib
bsd-3-clause
The original ReacLib source
We can easily see the original source from ReacLib:
print(c13pg.original_source)
Evaluate the rate at a given temperature (in K)
This is just the temperature-dependent portion of the rate, usually expressed as $N_A \langle \sigma v \rangle$.
c13pg.eval(1.e9)
Human-readable string
We can print out a string describing the rate and the nuclei involved:
print(c13pg)
The nuclei involved are all Nucleus objects. They have members Z and N that give the proton and neutron number
print(c13pg.reactants)
print(c13pg.products)
r2 = c13pg.reactants[1]
Note that each of the nuclei is a pynucastro Nucleus type:
type(r2)
print(r2.Z, r2.N)
temperature sensitivity We can find the temperature sensitivity about some reference temperature. This is the exponent when we write the rate as $$r = r_0 \left ( \frac{T}{T_0} \right )^\nu$$. We can estimate this given a reference temperature, $T_0$
c13pg.get_rate_exponent(2.e7)
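The exponent ν can be recovered numerically as the logarithmic derivative d ln r / d ln T evaluated at T₀. Here is a library-independent sketch using a synthetic power-law rate (the `rate` function and the chosen ν are illustrative assumptions; the pyrl call above does the real computation):

```python
import math

def rate(T, r0=1.0, T0=2.e7, nu=16.7):
    # Synthetic power-law rate r = r0 * (T/T0)**nu; nu chosen only for illustration
    return r0 * (T / T0)**nu

def rate_exponent(rate_fn, T0, eps=1.e-6):
    # Central finite difference of ln(r) with respect to ln(T) at T0
    Tp, Tm = T0 * (1 + eps), T0 * (1 - eps)
    return (math.log(rate_fn(Tp)) - math.log(rate_fn(Tm))) / (math.log(Tp) - math.log(Tm))

print(rate_exponent(rate, 2.e7))  # recovers ~16.7 for an exact power law
```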
plot the rate's temperature dependence A reaction rate has a complex temperature dependence that is defined in the reaclib files. The plot() method will plot this for us
c13pg.plot()
A rate also knows its density dependence -- this is inferred from the reactants in the rate description and is used to construct the terms needed to write a reaction network. Note: since we want reaction rates per gram, this number is 1 less than the number of nuclei
c13pg.dens_exp
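The density exponent follows directly from counting reactants: for a rate per gram, a rate with n reacting nuclei scales as ρⁿ⁻¹. A sketch of that counting rule (the function name and string labels are illustrative, not the pynucastro API):

```python
def density_exponent(reactants):
    # A rate per gram with n reactants scales as rho**(n - 1)
    return len(reactants) - 1

print(density_exponent(["p", "c13"]))  # 1 -> two-body capture such as c13(p,g)n14
print(density_exponent(["n13"]))       # 0 -> a weak decay is density independent
```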
Working with a group of rates A RateCollection() class allows us to work with a group of rates. This is used to explore their relationship. Other classes (introduced soon) are built on this and will allow us to output network code directly.
files = ["c12-pg-n13-ls09", "c13-pg-n14-nacr", "n13--c13-wc12",
         "n13-pg-o14-lg06", "n14-pg-o15-im05", "n15-pa-c12-nacr",
         "o14--n14-wc12", "o15--n15-wc12"]
rc = pyrl.RateCollection(files)
Printing a rate collection shows all the rates
print(rc)
More detailed information is provided by network_overview()
print(rc.network_overview())
show a network diagram We visualize the network using NetworkX
rc.plot()
Explore the network's rates To evaluate the rates, we need a composition
comp = pyrl.Composition(rc.get_nuclei())
comp.set_solar_like()
Plot nuclides on a grid Nuclides in a network may also be visualized as cells on a grid of Z vs. N, colored by some quantity. This can be more interpretable for large networks. Calling gridplot without any arguments will just plot the grid - to see anything interesting we need to supply some conditions. Here is a plot ...
rc.gridplot(comp=comp, color_field="X", scale="log", area=36)
Unlike the network plot, this won't omit hydrogen and helium by default. To just look at the heavier nuclides, we can define a function to filter by proton number:
ff = lambda nuc: nuc.Z > 2
rc.gridplot(comp=comp, rho=1e4, T=1e8, color_field="activity", scale="log",
            filter_function=ff, area=20, cmap="Blues")
Integrating networks
If we don't just want to explore the network interactively in a notebook, but want to output code to integrate it, we need to create one of PythonNetwork or StarKillerNetwork.
pynet = pyrl.PythonNetwork(files)
A network knows how to express the terms that make up the function (in the right programming language). For instance, you can get the term for the ${}^{13}\mathrm{C} (p,\gamma) {}^{14}\mathrm{N}$ rate as:
print(pynet.ydot_string(c13pg))
and the code needed to evaluate that rate (the T-dependent part) as:
print(pynet.function_string(c13pg))
The write_network() method will output the python code needed to define the RHS of a network for integration with the SciPy integrators
pynet.write_network("cno_test_integrate.py")
%cat cno_test_integrate.py
We can now import the network that was just created and integrate it using the SciPy ODE solvers
import cno_test_integrate as cno
from scipy.integrate import solve_ivp
import numpy as np
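solve_ivp takes a right-hand-side function f(t, y), a time span, and initial conditions; the generated network module supplies exactly such an RHS. As a self-contained stand-in, here is a minimal fixed-step Euler sketch of the same idea on the toy ODE dy/dt = -y (a deliberately simplified version of what solve_ivp automates, not the author's network integration):

```python
import math

def rhs(t, y):
    # Toy right-hand side standing in for the generated network RHS
    return -y

# Fixed-step forward Euler over t in [0, 1]
t, y, dt = 0.0, 1.0, 1.e-4
while t < 1.0:
    y += dt * rhs(t, y)
    t += dt

print(y, math.exp(-1.0))  # Euler estimate vs the exact solution e**-1
```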
A network plot
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)
# `sol` is the solve_ivp result from the integration above
for n in range(cno.nnuc):
    ax.loglog(sol.t, sol.y[n, :] * cno.A[n], label=f"X({n})")
ax.set_xlim(1.e10, 1.e20)
ax.set_ylim(1.e-8, 1.0)
ax.legend(fontsize="small")
fig.set_size_inches((10, 8))
Backpropagation
In the next cells, you will be asked to complete functions for the Jacobian of the cost function with respect to the weights and biases. We will start with layer 3, which is the easiest, and work backwards through the layers. We'll define our Jacobians as, $$ \mathbf{J}_{\mathbf{W}^{(3)}} = \frac{\partia...
# GRADED FUNCTION
# Jacobian for the third layer weights. There is no need to edit this function.
def J_W3(x, y):
    # First get all the activations and weighted sums at each layer of the network.
    a0, z1, a1, z2, a2, z3, a3 = network_function(x)
    # We'll use the variable J to store parts of our result as we ...
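For a single neuron per layer, the layer-3 Jacobian above collapses to the scalar chain rule ∂C/∂w₃ = 2(a₃ − y)·σ′(z₃)·a₂. A scalar sketch verified against a finite difference (the weights and activations here are made-up stand-ins, not the assignment's variables):

```python
import math

def sigma(z):
    return 1.0 / (1.0 + math.exp(-z))

def d_sigma(z):
    s = sigma(z)
    return s * (1.0 - s)

# One-neuron-per-layer stand-in for the final layer
w3, b3, a2, y = 0.7, 0.1, 0.4, 0.25

def cost(w):
    a3 = sigma(w * a2 + b3)
    return (a3 - y)**2

# Analytic gradient via the chain rule: dC/dw3 = 2*(a3 - y) * sigma'(z3) * a2
z3 = w3 * a2 + b3
grad = 2 * (sigma(z3) - y) * d_sigma(z3) * a2

# Finite-difference check of the same derivative
eps = 1e-6
fd = (cost(w3 + eps) - cost(w3 - eps)) / (2 * eps)
print(grad, fd)  # the two values should agree closely
```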
We'll next do the Jacobian for the Layer 2. The partial derivatives for this are, $$ \frac{\partial C}{\partial \mathbf{W}^{(2)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \left( \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} \right) \frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{z}^...
# GRADED FUNCTION
# Compare this function to J_W3 to see how it changes.
# There is no need to edit this function.
def J_W2(x, y):
    # The first two lines are identical to in J_W3.
    a0, z1, a1, z2, a2, z3, a3 = network_function(x)
    J = 2 * (a3 - y)
    # the next two lines implement da3/da2, first σ' and ...
Layer 1 is very similar to Layer 2, but with an additional partial derivative term. $$ \frac{\partial C}{\partial \mathbf{W}^{(1)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \left( \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} \frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{a}^{(1)}} \...
# GRADED FUNCTION
# Fill in all incomplete lines.
# ===YOU SHOULD EDIT THIS FUNCTION===
def J_W1(x, y):
    a0, z1, a1, z2, a2, z3, a3 = network_function(x)
    J = 2 * (a3 - y)
    J = J * d_sigma(z3)
    J = (J.T @ W3).T
    J = J * d_sigma(z2)
    J = (J.T @ W2).T
    J = J * d_sigma(z1)
    J = J @ a0.T / x.size...
Test your code before submission To test the code you've written above, run all previous cells (select each cell, then press the play button [ ▶| ] or press shift-enter). You can then use the code below to test out your function. You don't need to submit these cells; you can edit and run them as much as you like. First...
x, y = training_data()
reset_network()
Next, if you've implemented the assignment correctly, the following code will iterate through a steepest descent algorithm using the Jacobians you have calculated. The function will plot the training data (in green) and your neural network's output, in pink for each iteration and orange for the last output. It takes ...
plot_training(x, y, iterations=50000, aggression=7, noise=1)
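The training loop above is plain steepest descent: repeatedly step each weight against its Jacobian, scaled by the learning rate (called aggression here). A minimal 1-D sketch of the same update rule on a toy cost function:

```python
# Steepest descent on C(w) = (w - 3)**2, whose gradient is 2*(w - 3)
w = 0.0
aggression = 0.1  # learning rate, named after the plot_training parameter
for _ in range(100):
    grad = 2 * (w - 3)
    w = w - aggression * grad  # step against the gradient

print(w)  # converges toward the minimum at w = 3
```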
Using the function above, we can compute the probability of getting heads exactly 24 times in 30 coin tosses.
binom_distribution(30, 24, 1/2.)
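The binomial probability can also be computed exactly with the standard library's math.comb; a sketch of both the point probability and the tail P(X ≥ 24) used in the hypothesis test:

```python
from math import comb

def binom_pmf(n, k, p):
    # Exact binomial probability of k successes in n trials
    return comb(n, k) * p**k * (1 - p)**(n - k)

exactly_24 = binom_pmf(30, 24, 0.5)
at_least_24 = sum(binom_pmf(30, k, 0.5) for k in range(24, 31))
print(exactly_24)   # ~0.00055
print(at_least_24)  # ~0.00072, i.e. roughly 7 in 10,000
```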
previous/y2017/W08-numpy-hypothesis_test/.ipynb_checkpoints/GongSu19-Statistical_Hypothesis_Test-checkpoint.ipynb
liganega/Gongsu-DataSci
gpl-3.0
By running the experiment above repeatedly, we can confirm that the result is close to the value computed theoretically earlier: a probability of about 0.0007, i.e., roughly 7 in 10,000. Is the coin fair? The simulation likewise shows that the probability of getting heads 24 or more times in 30 tosses falls far short of 5%. In that case we say we cannot accept the null hypothesis (H0) that the coin is fair; that is, we must reject it. To summarize what we have covered for hypothesis testing:
The six steps of hypothesis testing
1) Decide the hypothesis to test. * Null hypothesis: here, "the coin is norma...
def coin_experiment_2(num_repeat):
    # num_tosses (30) is defined earlier in the notebook
    experiment = np.random.randint(0, 2, [num_repeat, num_tosses])
    return experiment.sum(axis=1)

heads_count = coin_experiment_2(100000)
sns.distplot(heads_count, kde=False)
mask = heads_count >= 24
heads_count[mask].shape[0] / 100000
Train We're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.
checkpoint = "checkpoints/best_model.ckpt"

# Split data to training and validation sets
train_source = source_letter_ids[batch_size:]
train_target = target_letter_ids[batch_size:]
valid_source = source_letter_ids[:batch_size]
valid_target = target_letter_ids[:batch_size]
(valid_targets_batch, valid_sources_batch, val...
seq2seq/sequence_to_sequence_implementation.ipynb
swirlingsand/deep-learning-foundations
mit
Prediction
def source_to_seq(text):
    '''Prepare the text for the model'''
    sequence_length = 7
    return ([source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text]
            + [source_letter_to_int['<PAD>']] * (sequence_length - len(text)))

input_sentence = 'ih'
text = source_to_seq(input_sentence)

loaded_graph ...
Analyzing Income Distribution The American Community Survey (published by the US Census) annually reports the number of individuals in a given income bracket at the state level. We can use this information, stored in Data Commons, to visualize disparity in income for each state in the US. Our goal for this tutorial wil...
# Import the Data Commons Pandas library
import datacommons_pandas as dc

# Import other libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
notebooks/analyzing_income_distribution.ipynb
datacommonsorg/api-python
apache-2.0
Getting the Data
The Data Commons graph identifies 16 different income brackets. The list of these variables can be found under the "Household" category in our list of StatisticalVariables. We can use get_places_in to get all states within the United States. We can then call build_multivariate_dataframe on the list of s...
states = dc.get_places_in(['country/USA'], 'State')['country/USA']

# A list of income bracket StatisticalVariables
income_brackets = [
    "Count_Household_IncomeOfUpto10000USDollar",
    "Count_Household_IncomeOf10000To14999USDollar",
    "Count_Household_IncomeOf15000To19...
To get the names of states, we can use the get_property_values function:
# Get all state names and store it in a column "name"
# Get the first name, if there are multiple for a state
data.insert(0, 'name', data.index.map(dc.get_property_values(data.index, 'name')).str[0])
data.head(5)
Analyzing the Data Let's plot our data as a histogram. Notice that the income ranges as tabulated by the US Census are not equal. At the low end, the range is 0-9999, whereas, towards the top, the range 150,000-199,999 is five times as broad! We will make the width of each of the columns correspond to their range, and ...
# Bar chart endpoints (for calculating bar width)
label_to_range = {
    "Count_Household_IncomeOfUpto10000USDollar": [0, 9999],
    "Count_Household_IncomeOf10000To14999USDollar": [10000, 14999],
    "Count_Household_IncomeOf15000To19999USDollar": [15000, 19999],
    "Count_Household_IncomeOf20000To24999USDollar": [20000, 249...
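To make unequal brackets comparable, each bar's height should be the count divided by the bracket's width (a density), with the bar drawn across the bracket's dollar range. A small sketch of that normalization using made-up counts (not real Census data):

```python
# Hypothetical counts per bracket, keyed by a human-readable range label
brackets = {
    "0-9999": ([0, 9999], 50000),
    "10000-14999": ([10000, 14999], 30000),
}

densities = {}
for name, ((lo, hi), count) in brackets.items():
    width = hi - lo + 1          # bracket width in dollars
    densities[name] = count / width  # households per dollar of bracket
    print(name, width, densities[name])
```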
We can then call this code with a state to plot the income bracket sizes.
#@title Enter State to plot { run: "auto" } state_name = "Idaho" #@param ["Missouri", "Arkansas", "Arizona", "Ohio", "Connecticut", "Vermont", "Illinois", "South Dakota", "Iowa", "Oklahoma", "Kansas", "Washington", "Oregon", "Hawaii", "Minnesota", "Idaho", "Alaska", "Colorado", "Delaware", "Alabama", "North Dakota", "M...
and we can display the raw table of values.
# Additionally print the table of income bracket sizes
result
Introduction to TensorFlow Part 2 - Debugging and Control Flow
#@title Upgrade to TensorFlow 2.5+
!pip install --upgrade tensorflow

#@title Install and import Libraries for this colab. RUN ME FIRST!
!pip install matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.summary.writer.writer import FileWriter
%load_ext tensorboard
tf_quant_finance/examples/jupyter_notebooks/Introduction_to_TensorFlow_Part_2_-_Debugging_and_Control_Flow.ipynb
google/tf-quant-finance
apache-2.0
What this notebook covers
This notebook carries on from part 1, and covers the basics of control flow and debugging in tensorflow:
* various debugging aids
* loops
* conditionals
These features are used in graph mode or inside a tf.function. For simplicity, eager execution is disabled throughout the training. Deb...
def plus_one(x):
    print("input has type %s, value %s" % (type(x), x))
    output = x + 1.0
    print("output has type %s, value %s" % (type(output), output))
    return output

# Let us create a graph where `plus_one` is invoked during the graph construction
g = tf.Graph()
with g.as_default():
    x = tf.constant([1.0, 2.0, 3.0])
    ...
tf.print Full Documentation The tf.print op is another useful debugging tool. It takes any number of tensors and python objects, and prints them to stdout. There are a few optional parameters, to control formatting and where it prints to. See the documentation for details.
# Define a TensorFlow function
@tf.function
def print_fn(x):
    # Note that `print_trace` is a TensorFlow Op. See the next section for details
    print_trace = tf.print(
        "`input` has value", x, ", type", type(x), "and shape", tf.shape(x))

# Create some inputs
a = tf.constant([1, 2])

# Call the function
print_fn(...
If you're using eager execution mode, that's all you need to know. For deferred execution however there are some significant complications that we'll discuss in the next section. tf.control_dependencies Full Documentation One of the easiest optimisations tensorflow makes when in deferred mode is to eliminate unused ops...
g = tf.Graph()
with g.as_default():
    a = tf.constant(1) + tf.constant(1)
    print_trace = tf.print("a is set to ", a)
    b = a * 2

with tf.compat.v1.Session(graph=g) as sess:
    results = sess.run(b)
Then we don't get any output. Nothing depends on print_trace (in fact nothing can depend on it: tf.print doesn't return anything to depend on), so it gets dropped from the graph before execution occurs. If you want print_trace to be evaluated, then you need to ask for it explicitly:
g = tf.Graph()
with g.as_default():
    a = tf.constant(1) + tf.constant(1)
    print_trace = tf.compat.v1.print("a is set to", a)
    b = a * 2

with tf.compat.v1.Session(graph=g) as sess:
    results = sess.run((b, print_trace))
That's fine for our noddy sample above, but it obviously has problems as your graph grows larger or the sess.run call gets further removed from the graph definition. The solution for that is tf.control_dependencies. This signals to tensorflow that the given set of prerequisite ops must be evaluated before a set of depen...
g = tf.Graph()
with g.as_default():
    a = tf.constant(1) + tf.constant(1)
    print_trace = tf.print("a is set to", a)
    hello_world = tf.print("hello world")
    with tf.control_dependencies((print_trace, hello_world)):
        # print_trace and hello_world will always be evaluated
        # before b can be evaluated
        b = a...
Note that if all of the dependent ops are pruned from the dependency tree and thus not evaluated, then the prerequisites will not be evaluated either: e.g. if we call sess.run(c) in the example above, then print_trace and hello_world won't be evaluated.
# Nothing gets printed
with tf.compat.v1.Session(graph=g) as sess:
    results = sess.run(c)
tf.debugging.Assert Full Documentation
In addition to tf.print, the other common use of control_dependencies is tf.debugging.Assert. This op does what you'd expect: it checks a boolean condition and aborts execution with an InvalidArgumentError if the condition is not true. Just like tf.print, it is likely to be pruned fro...
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, shape=[])
    with tf.control_dependencies([
            tf.debugging.Assert(tf.not_equal(x, 0), ["Invalid value for x:", x])]):
        y = 2.0 / x

with tf.compat.v1.Session(graph=g) as sess:
    try:
        results = sess.run(y, feed_dict={x: 0.0})
    ex...
There are also a bunch of helper methods, such as
* assert_equal
* assert_positive
* assert_rank_at_least
* etc.
to simplify common uses of tf.Assert. So our sample above could have been written as:
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, shape=[])
    with tf.control_dependencies([tf.debugging.assert_none_equal(x, 0.0)]):
        y = 2.0 / x

with tf.compat.v1.Session(graph=g) as sess:
    try:
        results = sess.run(y, feed_dict={x: 0.0})
    except tf.errors.InvalidArgumentError a...
Control Flow
tf.cond Full Documentation
The cond op is the TensorFlow equivalent of if-else. It takes:
* a condition, which must resolve down to a single scalar boolean
* true_fn: a python callable to generate one or more tensors that will be evaluated if the condition is true
* false_fn: same as true_fn, bu...
# This won't work
try:
    tf.cond(tf.constant(True), tf.constant(1), tf.constant(2))
except TypeError as e:
    pass

# You need a callable:
tf.cond(tf.constant(True), lambda: tf.constant(1), lambda: tf.constant(2))
The exact order of execution is a little complicated:
* true_fn and false_fn are executed just once, when the graph is being built.
* the tensors created by true_fn (if the condition is true) or false_fn (if the condition is false) will be evaluated strictly after the condition has been evaluated
* the tensors cre...
def dependency_fn():
    print("DEPENDENCY: I'm always evaluated at execution time because I'm a dependency\n")
    return tf.constant(2)

dependency = tf.py_function(dependency_fn, inp=[], Tout=tf.int32)

def true_op_fn():
    print("TRUE_OP_FN: I'm evaluated at execution time because condition is True\n")
    return 1

def ...
tf.while_loop Full Documentation
This is one of the more complicated ops in TensorFlow, so we'll take things step by step. The most important parameter is loop_vars. This is a tuple/list of tensors. Next up is cond: this is a python callable that should take the same number of arguments as loop_vars contains, and r...
g = tf.Graph()
with g.as_default():
    index = tf.constant(1)
    accumulator = tf.constant(0)
    loop = tf.while_loop(
        loop_vars=[index, accumulator],
        cond=lambda idx, acc: idx < 4,
        body=lambda idx, acc: [idx + 1, acc + idx]
    )

with tf.compat.v1.Session() as sess:
    with FileWriter("logs", sess...
but our second-level approximation will do for now). Notice how index_iteration_3 doesn't depend on the accumulator values at all. Thus, assuming our accumulator tensors were doing something more complicated than adding two integers, and assuming we were running on hardware with plenty of execution units, then it's possibl...
# First let us explicitly disable Autograph
@tf.function(autograph=False)
def loop_fn(index, max_iterations):
    for index in range(max_iterations):
        index += 1
        if index == 4:
            tf.print('index is equal to 4')
    return index

# Create some inputs
index = tf.constant(0)
max_iterations = tf.constant(5)

# Try ...
Exercise: Mandelbrot set
Recall that the Mandelbrot set is defined as the set of values of $c$ in the complex plane such that the recursion $z_{n+1} = z_{n}^2 + c$ does not diverge. It is known that all such values of $c$ lie inside the circle of radius $2$ around the origin. So what we'll do is: Create a 2-d tensor c...
MAX_ITERATIONS = 64
NUM_PIXELS = 512

def GenerateGrid(nX, nY, bottom_left=(-1.0, -1.0), top_right=(1.0, 1.0)):
    """Generates a complex matrix of shape [nX, nY].

    Generates an evenly spaced grid of complex numbers spanning the
    rectangle between the supplied diagonal points.

    Args:
      nX: A positive intege...
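The escape-time recursion itself is easy to sketch in plain Python before vectorizing it in TensorFlow: iterate z ← z² + c and count the steps until |z| exceeds the escape radius of 2 (a scalar sketch, not the exercise's tensor solution):

```python
MAX_ITERATIONS = 64

def escape_time(c, max_iterations=MAX_ITERATIONS):
    # Count iterations of z <- z*z + c before |z| exceeds 2
    z = 0j
    for n in range(max_iterations):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iterations  # never escaped: treated as inside the set

print(escape_time(0j))      # 64 -- c = 0 never diverges
print(escape_time(1 + 0j))  # 3  -- c = 1 escapes quickly
```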
Real, fully fledged grammars for toponyms are collected in the <a href="https://github.com/natasha/natasha">Natasha</a> repository. Finding a substring in the text is not enough; it has to be split into fields and normalized. For example, from the phrase "12 марта по приказу президента Владимира Путина ...", we extract the object Person(position='президен...
from yargy import Parser
from yargy.predicates import gram
from yargy.pipelines import morph_pipeline
from yargy.interpretation import fact
from IPython.display import display

Person = fact(
    'Person',
    ['position', 'name']
)
Name = fact(
    'Name',
    ['first', 'last']
)
POSITION = morph_pipeline([
    'пре...
docs/index.ipynb
bureaucratic-labs/yargy
mit
Grammars for names are collected in the Natasha repository.
Tokenizer
The parser works with a sequence of tokens. The tokenizer built into Yargy is simple and predictable:
from yargy.tokenizer import MorphTokenizer

tokenizer = MorphTokenizer()
text = '''Ростов-на-Дону
Длительностью 18ч. 10мин.
Яндекс.Такси
π ≈ 3.1415
1 500 000$
http://vk.com
'''
for line in text.splitlines():
    print([_.value for _ in tokenizer(line)])
For each token, Pymorphy2 returns a set of grammemes. For example, "NOUN, sing, femn" means "a noun, singular, feminine". The full list is in the <a href="https://pymorphy2.readthedocs.io/en/latest/user/grammemes.html">Pymorphy2 documentation</a>. Out of context, a word has several possible parses. For example, ...
tokenizer = MorphTokenizer()
list(tokenizer('марки стали'))
The tokenizer is rule-based. The <a href="ref.ipynb#Токенизатор">reference</a> shows how to change the standard rules and add new ones.
Predicates
A predicate takes a token and returns True or False. Yargy ships with a <a href="ref.ipynb#Предикаты">set of ready-made predicates</a>. The operators and_, or_ and not_ combine...
from yargy import and_, not_
from yargy.tokenizer import MorphTokenizer
from yargy.predicates import is_capitalized, eq

tokenizer = MorphTokenizer()
token = next(tokenizer('Стали'))

predicate = is_capitalized()
assert predicate(token) == True

predicate = and_(
    is_capitalized(),
    not_(eq('марки'))
)
assert pr...
<a href="ref.ipynb#predicates.custom">custom</a> creates a predicate from an arbitrary function. For example, a predicate for Roman numerals:
from pymorphy2.shapes import is_roman_number
from yargy.parser import Context
from yargy.tokenizer import Tokenizer
from yargy.predicates import custom

tokenizer = Tokenizer()
token = next(tokenizer('XL'))
predicate = custom(is_roman_number, types='LATIN')
predicate = predicate.activate(Context(tokenizer))
# прове...
Gazetteer
A gazetteer works with a sequence of words. For example, instead of:
from yargy import or_, rule
from yargy.predicates import normalized

RULE = or_(
    rule(normalized('dvd'), '-', normalized('диск')),
    rule(normalized('видео'), normalized('файл'))
)
it is convenient to use morph_pipeline:
from yargy import Parser
from yargy.pipelines import morph_pipeline

RULE = morph_pipeline([
    'dvd-диск',
    'видео файл',
    'видеофильм',
    'газета',
    'электронный дневник',
    'эссе',
])
parser = Parser(RULE)
text = 'Видео файл на dvd-диске'
for match in parser.findall(text):
    print([_.value for _ in...
The list of gazetteers is in the <a href="ref.ipynb#Газеттир">reference</a>.
Grammars
In Yargy, a context-free grammar is described with Python constructs. For example, the traditional notation for a clothing-size grammar:
KEY -> р. | размер
VALUE -> S | M | L
SIZE -> KEY VALUE
This is how it looks in Yargy:
from yargy import rule, or_

KEY = or_(
    rule('р', '.'),
    rule('размер')
).named('KEY')
VALUE = or_(
    rule('S'),
    rule('M'),
    rule('L'),
).named('VALUE')
SIZE = rule(
    KEY,
    VALUE
).named('SIZE')
SIZE.normalized.as_bnf
In Yargy, a grammar terminal is a predicate. Let's use the built-in predicate in_ to shorten the definition of VALUE:
from yargy.predicates import in_

VALUE = rule(
    in_('SML')
).named('VALUE')
SIZE = rule(
    KEY,
    VALUE
).named('SIZE')
SIZE.normalized.as_bnf
What do we do when the right-hand side of a rule refers to its left-hand side? For example:
EXPR -> a | ( EXPR + EXPR )
In Python you cannot use undeclared variables. For recursive rules there is the forward construction:
from yargy import forward

EXPR = forward()
EXPR.define(or_(
    rule('a'),
    rule('(', EXPR, '+', EXPR, ')')
).named('EXPR'))
EXPR.normalized.as_bnf
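The recursion that forward expresses can be sketched in plain Python, independent of Yargy, as a tiny recursive-descent recognizer for EXPR -&gt; a | ( EXPR + EXPR ). The function names here are illustrative only, not part of the library:

```python
# Minimal recursive-descent recognizer for EXPR -> a | ( EXPR + EXPR ).
# Plain-Python sketch of the recursion that `forward` expresses; not Yargy code.

def parse_expr(tokens, i=0):
    "Try to match EXPR starting at position i; return the end position or None."
    if i < len(tokens) and tokens[i] == 'a':
        return i + 1
    if i < len(tokens) and tokens[i] == '(':
        j = parse_expr(tokens, i + 1)          # left EXPR
        if j is not None and j < len(tokens) and tokens[j] == '+':
            k = parse_expr(tokens, j + 1)      # right EXPR
            if k is not None and k < len(tokens) and tokens[k] == ')':
                return k + 1
    return None

def matches(text):
    "A whole-string match, analogous to parsing the full token sequence."
    tokens = text.split()
    return parse_expr(tokens) == len(tokens)
```

Here matches('( a + ( a + a ) )') is True while matches('( a + )') is False; the nested calls mirror the self-reference that forward/define makes legal in the grammar definition.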
Recursive rules describe token sequences of arbitrary length. A grammar for text in quotes:
from yargy import not_
from yargy.predicates import eq

WORD = not_(eq('»'))
TEXT = forward()
TEXT.define(or_(
    rule(WORD),
    rule(WORD, TEXT)
))
TITLE = rule(
    '«',
    TEXT,
    '»'
).named('TITLE')
TITLE.normalized.as_bnf
For convenience, Yargy has the repeatable method, which makes the definition shorter; the library adds forward automatically:
TITLE = rule(
    '«',
    not_(eq('»')).repeatable(),
    '»'
).named('TITLE')
TITLE.normalized.as_bnf
Parser The parser has two methods: findall and match. findall finds all non-overlapping substrings that satisfy the grammar:
parser = Parser(
    or_(
        PERSON,
        TITLE
    )
)
text = 'Президент Владимир Путин в фильме «Интервью с Путиным» ..'
for match in parser.findall(text):
    print([_.value for _ in match.tokens])
match tries to parse the whole text:
match = parser.match('Президент Владимир Путин')
print([_.value for _ in match.tokens])

match = parser.match('Президент Владимир Путин 25 мая')
print(match)
Interpretation The result of parsing is a parse tree. A grammar and parse trees for dates:
from IPython.display import display
from yargy.predicates import (
    lte, gte,
    dictionary
)

MONTHS = {
    'январь',
    'февраль',
    'март',
    'апрель',
    'май',
    'июнь',
    'июль',
    'август',
    'сентябрь',
    'октябрь',
    'ноябрь',
    'декабрь'
}
MONTH_NAME = dictionary(MONTHS)
MONTH...
Interpretation is the process of turning a parse tree into an object with a set of fields. For a date, for example, we want structures like Date(year=2016, month=1, day=2). The user marks the tree with attribute nodes and constructor nodes using the interpretation method:
from yargy.interpretation import fact

Date = fact(
    'Date',
    ['year', 'month', 'day']
)

DATE = or_(
    rule(
        DAY.interpretation(
            Date.day
        ),
        MONTH_NAME.interpretation(
            Date.month
        ),
        YEAR.interpretation(
            Date.year
        )
    ),
    ...
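As a rough plain-Python analogy (not Yargy internals), the assembly step collects values from the attribute nodes and hands them to the fact constructor. The token strings below are invented for illustration:

```python
from collections import namedtuple

# Plain-Python stand-in for yargy's `fact`: a Date record with three fields.
Date = namedtuple('Date', ['year', 'month', 'day'])

# Pretend the parser matched three tokens and the interpretation markup
# assigned each token to an attribute of the Date constructor node.
attributes = {'day': '8', 'month': 'января', 'year': '2014'}

# The constructor node assembles the object from the collected attributes.
match_fact = Date(**attributes)
print(match_fact)  # Date(year='2014', month='января', day='8')
```

The real library does the same walk over the annotated tree, which is why every attribute node must sit below exactly one constructor node.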
From the annotated tree the library assembles an object:
for line in text.splitlines():
    match = parser.match(line)
    display(match.fact)
More on interpretation in the <a href="ref.ipynb#Интерпретация">reference</a>. Normalization The contents of fact fields need to be normalized. For example, not Date('июня', '2018') but Date(6, 2018); not Person('президента', Name('Владимира', 'Путина')) but Person('президент', Name('Владимир', 'Путин')). In Yargy, the user ...
MONTHS = {
    'январь': 1,
    'февраль': 2,
    'март': 3,
    'апрель': 4,
    'май': 5,
    'июнь': 6,
    'июль': 7,
    'август': 8,
    'сентябрь': 9,
    'октябрь': 10,
    'ноябрь': 11,
    'декабрь': 12
}
DATE = rule(
    DAY.interpretation(
        Date.day.custom(int)
    ),
    MONTH_NAME.interpretation(...
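The same normalization idea can be sketched standalone: once the month token is brought to its normal form, the MONTHS mapping converts it to a number, and custom(int) corresponds to a plain int() cast (helper name is illustrative):

```python
# Standalone sketch of the normalization step, no Yargy required.
# MONTHS turns a normalized month name into a number; int() plays the role
# of `custom(int)` for the day and year strings.
MONTHS = {
    'январь': 1, 'февраль': 2, 'март': 3, 'апрель': 4,
    'май': 5, 'июнь': 6, 'июль': 7, 'август': 8,
    'сентябрь': 9, 'октябрь': 10, 'ноябрь': 11, 'декабрь': 12,
}

def normalize_date(day, month_name, year):
    "Turn raw matched strings into a (year, month, day) tuple of ints."
    return int(year), MONTHS[month_name], int(day)

print(normalize_date('8', 'июнь', '2014'))  # (2014, 6, 8)
```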
More in the <a href="ref.ipynb#Нормализация">reference</a>. Agreement A primitive grammar for names:
NAME = rule(
    gram('Name').interpretation(
        Name.first.inflected()
    ),
    gram('Surn').interpretation(
        Name.last.inflected()
    )
).interpretation(
    Name
)
It has two problems. It matches phrases where the first name and the surname are in different cases:
parser = Parser(NAME)
for match in parser.findall('Илье Ивановым, Павлом Семенов'):
    print([_.value for _ in match.tokens])
The first name and the surname are normalized independently, so we end up with a woman called "Иванов" (the masculine form):
parser = Parser(NAME)
for match in parser.findall('Сашу Иванову, Саше Иванову'):
    display(match.fact)
In Yargy, agreement between words in a phrase is established with the match method. For number agreement, pass number_relation to match; for case, gender, and number agreement, pass gnc_relation:
from yargy.relations import gnc_relation

gnc = gnc_relation()
NAME = rule(
    gram('Name').interpretation(
        Name.first.inflected()
    ).match(gnc),
    gram('Surn').interpretation(
        Name.last.inflected()
    ).match(gnc)
).interpretation(
    Name
)

parser = Parser(NAME)
for match in parser.findall(...
Review Before we start playing with the actual implementations let us review a couple of things about MDPs. A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state, not on...
%psource MDP
mdp.ipynb
AmberJBlue/aima-python
mit
With this we have successfully represented our MDP. Later we will look at ways to solve this MDP. Grid MDP Now we look at a concrete implementation that makes use of the MDP as base class. The GridMDP class in the mdp module is used to represent a grid world MDP like the one shown in Fig 17.1 of the AIMA Book. The c...
%psource GridMDP
Value Iteration Now that we have looked at how to represent MDPs, let's aim at solving them. Our ultimate goal is to obtain an optimal policy. We start by looking at Value Iteration and a visualisation that should help us understand it better. We start by calculating the Value/Utility for each of the states. The Value of...
%psource value_iteration
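The Bellman update behind value_iteration can be sketched on a toy two-state MDP in plain Python. The states, rewards, and transitions below are invented for illustration; the stopping rule follows the epsilon bound described above:

```python
# Toy value iteration on a 2-state MDP (states A, B), pure Python.
# R: reward per state; T[s][a]: list of (probability, next_state); gamma: discount.
R = {'A': 0.0, 'B': 1.0}
T = {
    'A': {'go': [(0.8, 'B'), (0.2, 'A')], 'stay': [(1.0, 'A')]},
    'B': {'go': [(1.0, 'A')], 'stay': [(1.0, 'B')]},
}
gamma = 0.9

def value_iteration_sketch(R, T, gamma, epsilon=1e-6):
    "Iterate the Bellman update until utilities change by less than the bound."
    U1 = {s: 0.0 for s in R}
    while True:
        U = U1.copy()
        delta = 0.0
        for s in R:
            U1[s] = R[s] + gamma * max(
                sum(p * U[s1] for p, s1 in T[s][a]) for a in T[s]
            )
            delta = max(delta, abs(U1[s] - U[s]))
        if delta < epsilon * (1 - gamma) / gamma:
            return U

U = value_iteration_sketch(R, T, gamma)
print(U)  # staying in B forever gives U['B'] = 1 / (1 - 0.9) = 10
```

The closed form checks out: U['B'] = 1/(1-0.9) = 10, and U['A'] solves U['A'] = 0.9 * (0.8 * 10 + 0.2 * U['A']) ≈ 8.78.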
It takes two parameters as input: an MDP to solve, and epsilon, the maximum error allowed in the utility of any state. It returns a dictionary of utilities where the keys are the states and the values are their utilities. Let us solve the sequential_decision_environment GridMDP.
value_iteration(sequential_decision_environment)
Visualization for Value Iteration To illustrate how values propagate out of states, let us create a simple visualisation. We will be using a modified version of the value_iteration function which will store U over time. We will also remove the parameter epsilon and instead pass in the number of iterations we want.
def value_iteration_instru(mdp, iterations=20):
    U_over_time = []
    U1 = {s: 0 for s in mdp.states}
    R, T, gamma = mdp.R, mdp.T, mdp.gamma
    for _ in range(iterations):
        U = U1.copy()
        for s in mdp.states:
            U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)]) ...
Next, we define a function to create the visualisation from the utilities returned by value_iteration_instru. The reader need not concern themselves with the code that immediately follows, as it is just usage of Matplotlib with IPython Widgets. If you are interested in reading more about these, visit ipywidgets.readthedocs.io
%matplotlib inline
import matplotlib.pyplot as plt
from collections import defaultdict
import time

def make_plot_grid_step_function(columns, row, U_over_time):
    '''ipywidgets interactive function supports single parameter as input.
    This function creates and return such a function by taking in i...
Tabular model A basic model that can be used on tabular data Embeddings
#|export
def emb_sz_rule(
    n_cat:int # Cardinality of a category
) -> int:
    "Rule of thumb to pick embedding size corresponding to `n_cat`"
    return min(600, round(1.6 * n_cat**0.56))

#|export
def _one_emb_sz(classes, n, sz_dict=None):
    "Pick an embedding size for `n` depending on `classes` if not given in ...
nbs/42_tabular.model.ipynb
fastai/fastai
apache-2.0
Through trial and error, this general rule takes the lower of two values:
* A dimension space of 600
* A dimension space equal to 1.6 times the cardinality of the variable raised to the 0.56 power

This provides a good starting point for an embedding space for your variables. For more advanced users who wish to lean into this practice, yo...
#|export
def get_emb_sz(
    to:(Tabular, TabularPandas),
    sz_dict:dict=None # Dictionary of {'class_name' : size, ...} to override default `emb_sz_rule`
) -> list: # List of embedding sizes for each category
    "Get embedding size for each cat_name in `Tabular` or `TabularPandas`, or populate embedding size manu...
This model expects your cat and cont variables separated. cat is passed through an Embedding layer and potential Dropout, while cont is passed through a potential BatchNorm1d. Afterwards both are concatenated and passed through a series of LinBnDrop, before a final Linear layer corresponding to the expected outputs.
emb_szs = [(4,2), (17,8)]
m = TabularModel(emb_szs, n_cont=2, out_sz=2, layers=[200,100]).eval()
x_cat = torch.tensor([[2,12]]).long()
x_cont = torch.tensor([[0.7633, -0.1887]]).float()
out = m(x_cat, x_cont)

#|export
@delegates(TabularModel.__init__)
def tabular_config(**kwargs):
    "Convenience function to easily c...
Any direct setup of TabularModel's internals should be passed through here:
config = tabular_config(embed_p=0.6, use_bn=False); config
RandomForest Parameter Tuning
=================================================================================================================
* Find optimal parameters through grid search and compare them with those obtained by the greedy method
* Among numerous scoring types such as recall, accuracy, precision, f1 score...
# baseline confirmation, implying that the model has to perform at least this well
from sklearn.dummy import DummyClassifier

clf_Dummy = DummyClassifier(strategy='most_frequent')
clf_Dummy = clf_Dummy.fit(X_train, y_train)
print('baseline score =>', round(clf_Dummy.score(X_test, y_test), 2))
3_model selection_evalutation.ipynb
higee/amazon-helpful-review
mit
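What the most-frequent dummy baseline does can be shown in plain Python: always predict the majority class of the training labels, then score accuracy on the test set. The toy labels below are illustrative, not the project's data:

```python
from collections import Counter

# Plain-Python version of the 'most_frequent' baseline.
def majority_baseline_score(y_train, y_test):
    "Predict the training majority class for every test example; return accuracy."
    majority = Counter(y_train).most_common(1)[0][0]
    return sum(y == majority for y in y_test) / len(y_test)

# Toy labels: 70% of train is class 0, so the baseline always predicts 0.
y_train = [0] * 7 + [1] * 3
y_test = [0, 0, 0, 1, 1]
print(round(majority_baseline_score(y_train, y_test), 2))  # 0.6
```

Any trained model that cannot beat this score has learned nothing beyond the class imbalance.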
===================================================================================================================
1. Find optimal parameters

1) Greedy method (sequential parameter tuning)
The good point of sequential parameter tuning is that I can see how a certain parameter affects the performance. ...however, I foun...
from sklearn.metrics import recall_score
from sklearn.ensemble import RandomForestClassifier
from matplotlib.pyplot import axvline, axhline

recall_range = []
n_estimator_range = []
for i in np.arange(10, 20, 1):
    clf_RF = RandomForestClassifier(oob_score=True, n_estimators=i).fit(X_train, y_train)
    clf_RF_predi...
(2) max features
recall_range = []
max_features_range = []
for i in np.arange(1, 15, 1):
    clf_RF = RandomForestClassifier(oob_score=True, n_estimators=18, max_features=i).fit(X_train, y_train)
    clf_RF_predicted = clf_RF.predict(X_test)
    recall = round(recall_score(y_test, clf_RF_predicted), 2)  # y_true first, then y_pred
    max_features_range.appe...
(3) min samples leaf
recall_range = []
min_samples_leaf_range = []
for i in np.arange(1, 20, 1):
    clf_RF = RandomForestClassifier(oob_score=True, n_estimators=18, max_features=14, min_samples_leaf=i).fit(X_train, y_train)
    clf_RF_predicted = clf_RF.predict(X_test)
    recall = round(recall_score(y_test, clf_RF_predicted), 2)  # y_true first, then y_pred
    ...
2) Grid Search A very efficient way to find optimal parameter values. Assigning an 'appropriate' value range seems essential in grid search. For instance, I set the value range for 'max_features' to 10 to 28, since I wanted the model to be robust, at the expense of performance to some extent.
from sklearn.pipeline import Pipeline

pipeline_clf_train = Pipeline(
    steps=[
        ('clf_RF', RandomForestClassifier()),
    ]
)

from sklearn.model_selection import GridSearchCV  # formerly sklearn.grid_search

parameters = {
    'clf_RF__min_samples_leaf' : np.arange(1, 28, 1),
    'clf_RF__max_features' : np.arange(10, 28, 1),
    'clf_RF__crite...
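One way to gauge the cost of this grid is to count its candidate combinations. The criterion values are assumed to be ['gini', 'entropy'], since that line is truncated above:

```python
from itertools import product

# Size of the grid: 27 leaf sizes x 18 max_features values x 2 criteria.
# Each candidate is fit once per cross-validation split.
parameters = {
    'min_samples_leaf': range(1, 28),     # 27 values
    'max_features': range(10, 28),        # 18 values
    'criterion': ['gini', 'entropy'],     # 2 values (assumed)
}
n_candidates = len(list(product(*parameters.values())))
print(n_candidates)  # 972
```

So even this modest grid means nearly a thousand forest fits per fold, which is why sensible value ranges matter.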
2. Take a look at the confusion matrix (what is a confusion matrix?)
1) Greedy method
The focal point of the confusion matrix in this project is 'helpful' + 'recall'. With the parameters obtained from the greedy method, I got 0.68/1.0, meaning that out of 129 'helpful' reviews I predicted 88 correctly.
clf_RF = RandomForestClassifier(n_estimators=18, max_features=14, min_samples_leaf=9, oob_score=True).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)

from sklearn.metrics import classification_report, confusion_matrix

target_names = ['not helpful', 'helpful']
print(classification_report(y_test, clf_RF_...
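The 0.68 recall quoted above is just the ratio of correctly predicted helpful reviews to all truly helpful ones:

```python
# Recomputing the quoted recall from the counts: 88 of the 129 truly
# 'helpful' reviews were predicted correctly.
true_positives = 88
actual_positives = 129
recall = true_positives / actual_positives
print(round(recall, 2))  # 0.68
```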
2) Grid Search Compared to the above, the performance is better (0.75/1.0):
clf_RF = RandomForestClassifier(n_estimators=10, max_features=19, min_samples_leaf=27, criterion='entropy', oob_score=True).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)

from sklearn.metrics import classification_report, confusion_matrix

target_names = ['not helpful', 'helpful']
print(classification_...