We compute the backward error, which is small.

```python
np.linalg.norm(b - np.dot(A, x))
```
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
We now compute the forward error. This is only possible because we know the exact solution by construction.

```python
np.linalg.norm(x - x_exact)

comparing_solutions = np.zeros((n, 2))
comparing_solutions[:, 0] = x_exact
comparing_solutions[:, 1] = x
df = pd.DataFrame(comparing_solutions,
                  columns=["Exact solution", "Numerical solution"])
df.style.set_caption("Comparison between the exact solution and NumPy solve's solution")

plt.figure(figsize=(8, 8))
plt.plot(x, '.', label='Numerical solution')
plt.plot(x_exact, 'r.', label='Exact solution')
plt.grid(True)
plt.xlabel('$i$')
plt.legend(loc='best')
plt.show()
```
<div id='classnotesPaluExample' />

### Motivation for PALU in the classnotes example

Back to TOC

```python
A = np.array([[1e-20, 1], [1, 2]])
l, u = lu_decomp(A, show=True)
np.dot(l, u)
p, l, u = palu_decomp(A, show=True)
```
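The helpers `lu_decomp` and `palu_decomp` are defined earlier in the notebook and are not reproduced here. As a sketch of what goes wrong (written from scratch below, not the notebook's own helper), a naive LU without pivoting loses the entry `2` of `A` to rounding, while factoring the row-swapped matrix reconstructs it exactly:

```python
import numpy as np

def lu_no_pivot(A):
    """Naive LU factorization without pivoting; breaks down when a pivot is tiny."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]       # huge multiplier if the pivot ~ 0
            A[i, k:] -= L[i, k] * A[k, k:]
    return L, np.triu(A)

A = np.array([[1e-20, 1.0], [1.0, 2.0]])
L, U = lu_no_pivot(A)
# In floating point, the entry A[1, 1] = 2 is swamped by the 1e20 multiplier:
print(L @ U)            # second row comes back as [1, 0], not [1, 2]

# Swapping the rows first (what partial pivoting does) fixes it:
P = np.array([[0.0, 1.0], [1.0, 0.0]])
L2, U2 = lu_no_pivot(P @ A)
print(L2 @ U2)          # equals P @ A
```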
<div id='iterativeMethodExample' />

### Iterative methods classnotes example

Back to TOC

```python
A = np.array([[5., 4.], [1., 3.]])
b = np.array([6., -1.])
np_sol = np.linalg.solve(A, b)

jac_sol = jacobi(A, b, n_iter=50)
jac_err = error(jac_sol, np_sol)
gauss_seidel_sol = gauss_seidel(A, b, n_iter=50)
gauss_seidel_err = error(gauss_seidel_sol, np_sol)
sor_sol = sor(A, b, w=1.09310345, n_iter=50)
sor_err = error(sor_sol, np_sol)

it = np.arange(50)  # iteration counter for the x-axis (was missing in this cell)

plt.figure(figsize=(12, 6))
plt.semilogy(it, jac_err, marker='o', linestyle='--', color='b', label='Jacobi')
plt.semilogy(it, gauss_seidel_err, marker='o', linestyle='--', color='r', label='Gauss-Seidel')
plt.semilogy(it, sor_err, marker='o', linestyle='--', color='g', label='SOR')
plt.grid(True)
plt.xlabel('Iterations')
plt.ylabel('Error')
plt.title('Infinity norm error for all methods')
plt.legend(loc=0)
plt.show()

log10_errors = np.zeros((50, 3))
log10_errors[:, 0] = jac_err
log10_errors[:, 1] = gauss_seidel_err
log10_errors[:, 2] = sor_err
log10_errors = np.log10(log10_errors[:10, :])
df = pd.DataFrame(log10_errors, columns=["Jacobi", "Gauss-Seidel", r"SOR($\omega$)"])
df.style.set_caption("Log10 of the error for each method")
```
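`jacobi`, `gauss_seidel`, `sor`, and `error` are the notebook's own helpers, defined earlier and not shown in this excerpt. For readers without that context, here is a minimal Jacobi sketch; the interface (returning the full iterate history so an error-per-iteration curve can be plotted) is an assumption about the notebook's helper, not its actual code:

```python
import numpy as np

def jacobi(A, b, n_iter=50, x0=None):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k), with R = A - D.

    Returns the array of iterates, one row per iteration (an assumed
    interface so per-iteration errors can be computed afterwards).
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)              # diagonal entries of A
    R = A - np.diagflat(D)      # off-diagonal part
    history = []
    for _ in range(n_iter):
        x = (b - R @ x) / D
        history.append(x.copy())
    return np.array(history)

A = np.array([[5., 4.], [1., 3.]])
b = np.array([6., -1.])
xs = jacobi(A, b)
print(xs[-1])  # converges toward np.linalg.solve(A, b)
```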
Step 2: Model the data

The data consists of the description of the products (the demand, the inside and outside costs, and the resource consumption) and the capacity of the various resources.

```python
# products are tuples (name, demand, inside cost, outside cost)
products = [("kluski", 100, 0.6, 0.8),
            ("capellini", 200, 0.8, 0.9),
            ("fettucine", 300, 0.3, 0.4)]

# resources are a list of simple tuples (name, capacity)
resources = [("flour", 20), ("eggs", 40)]

consumptions = {("kluski", "flour"): 0.5, ("kluski", "eggs"): 0.2,
                ("capellini", "flour"): 0.4, ("capellini", "eggs"): 0.4,
                ("fettucine", "flour"): 0.3, ("fettucine", "eggs"): 0.6}
```
examples/mp/jupyter/pasta_production.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 3: Prepare the data

The data is very simple and ready to use without any cleansing, massaging, or refactoring.

Step 4: Set up the prescriptive model

Create the DOcplex model. The model contains all the business constraints and defines the objective. We now use CPLEX Modeling for Python to build a Mixed Integer Programming (MIP) model for this problem.

```python
from docplex.mp.model import Model

mdl = Model(name="pasta")
```
Define the decision variables
```python
inside_vars = mdl.continuous_var_dict(products, name='inside')
outside_vars = mdl.continuous_var_dict(products, name='outside')
```
Express the business constraints Each product can be produced either inside the company or outside, at a higher cost. The inside production is constrained by the company's resources, while outside production is considered unlimited.
```python
# --- constraints ---

# demand satisfaction
mdl.add_constraints((inside_vars[prod] + outside_vars[prod] >= prod[1],
                     'ct_demand_%s' % prod[0]) for prod in products)

# resource capacity
mdl.add_constraints((mdl.sum(inside_vars[p] * consumptions[p[0], res[0]] for p in products) <= res[1],
                     'ct_res_%s' % res[0]) for res in resources)

mdl.print_information()
```
Express the objective

Minimize the total production cost for all products while satisfying customer demand.

```python
total_inside_cost = mdl.sum(inside_vars[p] * p[2] for p in products)
total_outside_cost = mdl.sum(outside_vars[p] * p[3] for p in products)
mdl.minimize(total_inside_cost + total_outside_cost)
```
Solve with Decision Optimization

We now have everything we need to solve the model using Model.solve(). The following cell solves with your local CPLEX (if any, and provided you have added it to your PYTHONPATH variable).

```python
mdl.solve()
```
Step 5: Investigate the solution and then run an example analysis
```python
obj = mdl.objective_value
print("* Production model solved with objective: {:g}".format(obj))
print("* Total inside cost=%g" % total_inside_cost.solution_value)
for p in products:
    print("Inside production of {product}: {ins_var}".format(
        product=p[0], ins_var=inside_vars[p].solution_value))
print("* Total outside cost=%g" % total_outside_cost.solution_value)
for p in products:
    print("Outside production of {product}: {out_var}".format(
        product=p[0], out_var=outside_vars[p].solution_value))
```
Configure the dataset for performance

These are two important methods you should use when loading data to make sure that I/O does not become blocking:

- `.cache()` keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
- `.prefetch()` overlaps data preprocessing and model execution while training.

You can learn more about both methods, as well as how to cache data to disk, in the data performance guide.

```python
AUTOTUNE = tf.data.experimental.AUTOTUNE

train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
```
courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Using the Embedding layer Keras makes it easy to use word embeddings. Take a look at the Embedding layer. The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
```python
# Embed a 1,000 word vocabulary into 5 dimensions.
embedding_layer = tf.keras.layers.Embedding(1000, 5)
```
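Conceptually, an embedding layer is just row lookup into a trainable weight matrix. Here is a numpy sketch (not part of the original notebook) of what `Embedding(1000, 5)` does with a batch of integer word indices:

```python
import numpy as np

vocab_size, embedding_dim = 1000, 5
rng = np.random.default_rng(0)
weights = rng.normal(size=(vocab_size, embedding_dim))  # the trainable lookup table

batch = np.array([[3, 17, 42]])   # (batch=1, sequence=3) integer word indices
embedded = weights[batch]         # fancy indexing is the lookup: shape (1, 3, 5)
print(embedded.shape)
```

The output dimensions, (batch, sequence, embedding), match what the classification model below expects.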
Create a classification model

Use the Keras Sequential API to define the sentiment classification model. In this case it is a "continuous bag of words" style model:

- The TextVectorization layer transforms strings into vocabulary indices. You have already initialized vectorize_layer as a TextVectorization layer and built its vocabulary by calling adapt on text_ds. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer.
- The Embedding layer takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array; the resulting dimensions are (batch, sequence, embedding).
- The GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length in the simplest way possible.
- The fixed-length output vector is piped through a fully connected (Dense) layer with 16 hidden units.
- The last layer is densely connected with a single output node.

Caution: This model doesn't use masking, so the zero-padding is used as part of the input and hence the padding length may affect the output. To fix this, see the masking and padding guide.

```python
embedding_dim = 16

model = Sequential([
    vectorize_layer,
    Embedding(vocab_size, embedding_dim, name="embedding"),
    GlobalAveragePooling1D(),
    Dense(16, activation='relu'),
    Dense(1)
])
```
Compile and train the model

Create a tf.keras.callbacks.TensorBoard callback.

```python
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
```
Compile and train the model using the Adam optimizer and BinaryCrossentropy loss.
```python
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=10,
    callbacks=[tensorboard_callback])
```
With this approach the model reaches a validation accuracy of around 84% (note that the model is overfitting since training accuracy is higher). Note: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer. You can look into the model summary to learn more about each layer of the model.
```python
model.summary()
```
Visualize the model metrics in TensorBoard.
```
!tensorboard --bind_all --port=8081 --logdir logs
```
Run the following command in Cloud Shell:

`gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081`

Make sure to replace `<instance-zone>`, `<notebook-instance-name>`, and `<project-id>`. In Cloud Shell, click Web Preview > Change Port and insert port number 8081. Click Change and Preview to open TensorBoard. To quit TensorBoard, click Kernel > Interrupt kernel.

Retrieve the trained word embeddings and save them to disk

Next, retrieve the word embeddings learned during training. The embeddings are the weights of the Embedding layer in the model. The weights matrix has shape (vocab_size, embedding_dimension). Obtain the weights from the model using get_layer() and get_weights(). The get_vocabulary() function provides the vocabulary to build a metadata file with one token per line.

```python
weights = model.get_layer('embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
```
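The cell that actually writes the two files is not included in this excerpt. A sketch consistent with the description above, using stand-in `vocab` and `weights` values in place of the notebook's own (which come from the model), might look like this:

```python
import io
import numpy as np

# Stand-ins for the previous cell's outputs (assumption: same variable names).
vocab = ['', '[UNK]', 'the', 'movie']
weights = np.random.default_rng(0).normal(size=(len(vocab), 5))

with io.open('vectors.tsv', 'w', encoding='utf-8') as out_v, \
     io.open('metadata.tsv', 'w', encoding='utf-8') as out_m:
    for index, word in enumerate(vocab):
        if index == 0:
            continue  # index 0 is the padding token; skip it
        out_v.write('\t'.join(str(x) for x in weights[index]) + '\n')
        out_m.write(word + '\n')
```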
Two files will be created, vectors.tsv and metadata.tsv. Download both files.

```python
try:
    from google.colab import files
    files.download('vectors.tsv')
    files.download('metadata.tsv')
except Exception:
    pass
```
The append() method takes whatever argument I supply to the function, and inserts it into the next available position in the list. Which, in this case, is the very first position (since the list was previously empty, and lists are ordered). What does our list look like now?
```python
print(x)
```
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
In this code example, I've used the number 1 as an index to y. In doing so, I took out the value at index 1 and put it into a variable named first_element. I then printed it, as well as the list y, and voi-- --wait, "2" is the second element. o_O Python and its spiritual progenitors C and C++ are known as zero-indexed languages. This means when you're dealing with lists or arrays, the index of the first element is always 0. This stands in contrast with languages such as Julia and Matlab, where the index of the first element of a list or array is, indeed, 1. Preference for one or the other tends to covary with whatever you were first taught, though in scientific circles it's generally preferred that languages be 0-indexed$^{[\text{citation needed}]}$. So what is in the 0 index of our list?
```python
print(y[0])
print(y)
```
Notice the first thing printed is "1". The next line is the entire list (the output of the second print statement). This little caveat is usually the main culprit of errors for new programmers. Give yourself some time to get used to Python's 0-indexed lists. You'll see what I mean when we get to loops. In addition to elements 0 and 1, we can also directly index elements at the end of the list.
```python
print(y[-1])
print(y)
```
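The slicing cell that the next paragraph discusses ("from 0 to 3") is not included in this excerpt; a stand-in sketch of what such a slice does:

```python
y = [1, 2, 3]     # a stand-in list; the lecture's own y is built in earlier cells
print(y[0:3])     # slicing "from 0 to 3" gives all of [1, 2, 3]
print(y[0:2])     # [1, 2] -- the ending index is excluded
```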
So, if we're slicing "from 0 to 3", why does this include index 0 but exclude index 3? When you slice an array, the first (starting) index is inclusive; the second (ending) index, however, is exclusive. In mathematical notation, it would look something like this: $[\,\text{starting} : \text{ending}\,)$. Therefore, the end index is one after the last index you want to keep.

One more thing about lists

You don't always have to start with empty lists. You can pre-define a full list; just use brackets!

```python
z = [42, 502.4, "some string", 0]
print(z)
```
Can't do that with tuples, sorry. Tuples are immutable, meaning you can't change them once you've built them.
```python
y[2] = "does this work?"  # with y a tuple, this raises a TypeError
```
There are certain situations where this can be very useful. It should be noted that sets can actually be built from lists, so you can build a list and then turn it into a set:
```python
x = [1, 2, 3, 3]
s = set(x)  # Take the list x and "cast" it as a set.
print(s)
```
If you want to add elements to a set, you can use the add method. If you want to remove elements from a set, you can use the discard or remove methods. But you can't index or slice a set. This is because sets do not preserve any notion of ordering. Think of them like a "bag" of elements: you can add or remove things from the bag, but you can't really say "this thing comes before or after this other thing." So why is a set useful? It's useful for checking if you've seen a particular kind of thing at least once.
```python
s = set([1, 3, 6, 2, 5, 8, 8, 3, 2, 3, 10])
print(10 in s)  # Basically asking: is 10 in our set?
print(11 in s)
```
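The add/discard/remove behavior described above can be sketched as follows:

```python
s = {1, 3, 6}
s.add(10)        # insert an element
s.add(3)         # already present: no effect, sets keep one copy of each element
s.discard(6)     # remove an element
s.discard(99)    # absent: discard() is silent; remove(99) would raise KeyError
print(s)         # {1, 3, 10} (printed order may vary -- sets are unordered)
```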
Aside: the in keyword is wonderful. It's a great way of testing if a variable is in your collection (list, set, or tuple) without having to loop over the entire collection looking for it yourself (which we'll see how to do in a bit).

Part 3: Dictionaries

Dictionaries deserve a section all to themselves. Are you familiar with key-value stores? Associative arrays? Hash maps? The basic idea of all these data type abstractions is to map a key to a value, in such a way that if you have a certain key, you always get back the value associated with that key. You can also think of dictionaries as unordered lists with more interesting indices. A few important points on dictionaries before we get into examples:

- Mutable. Dictionaries can be changed and updated.
- Unordered. Elements in dictionaries have no concept of ordering.
- Keys are distinct. The keys of dictionaries are unique; no key is ever copied. The values, however, can be copied as many times as you want.

Dictionaries are created using the dict() method, or using curly braces:

```python
d = dict()
# Or...
d = {}
```
Yes, you use strings as keys! In this way, you can treat dictionaries as "look up" tables--maybe you're storing information on people in a beta testing program. You can store their information by name:
d["shannon_quinn"] = ["some", "personal", "information"] print(d)
(it's basically the entire dictionary, but this method is useful for looping)

Speaking of looping...

Part 4: Loops

Looping is an absolutely essential component in general programming and data science. Whether you're designing a web app or a machine learning model, most of the time you have no idea how much data you're going to be working with. 100? 1,000? 952,458,482,789? In all cases, you're going to want to run the same code on each one. This is where computers excel: repetitive tasks on large amounts of information. There's a way of doing this in programming languages: loops. Let's define for ourselves the following list:

```python
ages = [21, 22, 19, 19, 22, 21, 22, 31]
```
But that's the problem mentioned earlier: how do you even know how many elements are in your list when your program runs on a web server? If it only had 3 elements the above code would work fine, but the moment your list has 2 items or 4 items, your code would need to be rewritten.

for loops

One type of loop, the "bread-and-butter" loop, is the for loop. There are three basic ingredients:

- some collection of "things" to iterate over
- a placeholder for the current "thing" (because the loop only works on 1 thing at a time)
- a chunk of code describing what to do with the current "thing"

Remember these three points every. time. you start writing a loop. They will help guide your intuition for how to code it up. Let's start simple: looping through a list, printing out each item one at a time.

```python
the_list = [2, 5, 7, 9]
for N in the_list:       # Header
    print(N, end=" ")    # Body
```
There are two main parts to the loop: the header and the body. The header contains 1) the collection we're iterating over (e.g., the list), and 2) the "placeholder" we're using to hold the current value (e.g., N). The body is the chunk of code under the header (indented!) that executes on each element of the collection, one at a time. Go over these last two slides until you understand what's happening! These are absolutely critical to creating arbitrary loops. Back, then, to computing an average:
```python
age_sum = 0  # Setting up a variable to store the sum of all the elements in our list.
ages = [21, 22, 19, 19, 22, 21, 22, 31]  # Here's our list.

# Summing up everything in a list is actually pretty easy, code-wise:
for age in ages:      # For each individual element (stored as "age") in the list "ages"...
    age_sum += age    # ...increment our variable "age_sum" by the quantity stored in "age".

avg = age_sum / len(ages)  # Compute the average using the formula we know and love!
print("Average age: {:.2f}".format(avg))
```
If what's happening isn't clear, go back over it. If it still isn't clear, go over it again. If it still isn't clear, please ask! You can loop through sets and tuples the same way.
```python
s = set([1, 1, 2, 3, 5])
for item in s:
    print(item, end=" ")

t = tuple([1, 1, 2, 3, 5])
for item in t:
    print(item, end=" ")
```
Important: INDENTATION MATTERS. You'll notice in these loops that the loop body is distinctly indented relative to the loop header. This is intentional and is indeed how it works! If you fail to indent the body of the loop, Python will complain:
```python
some_list = [3.14159, "random stuff", 4200]
for item in some_list:
print(item)  # not indented: Python raises an IndentationError
```
With loops, whitespace in Python really starts to matter. If you want many things to happen inside of a loop (and we'll start writing longer loops!), you'll need to indent every line! while loops "While" loops go back yet again to the concept of boolean logic we introduced in an earlier lecture: loop until some condition is reached. The structure here is a little different than for loops. Instead of explicitly looping over an iterator, you'll set some condition that evaluates to either True or False; as long as the condition is True, Python executes another loop.
```python
x = 10
while x < 15:
    print(x, end=" ")
    x += 1  # TAKE NOTE OF THIS LINE.
```
`x < 15` is a boolean statement: it is either True or False, depending on the value of x. Initially, x is 10, which is certainly < 15, so the loop executes: 10 is printed, x is incremented, and the condition is checked again. A potential downside of while loops is forgetting to update the condition inside the loop. It's easy to take for granted; for loops implicitly handle updating the loop variable for us.

```python
for i in range(10, 15):
    print(i, end=" ")  # No update needed!
```
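The same loop written with while, with the update step (which the for loop handles for you) made explicit:

```python
i = 10
while i < 15:
    print(i, end=" ")
    i += 1   # forget this update and the condition never becomes False
```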
Another way to look at random walks is through the Rayleigh distribution, but this first requires some familiarity with the normal distribution. The normal distribution, credited to Karl Gauss, appears in every textbook because it is central to so many everyday processes. The PDF of the normal distribution is given by

$$f\left(x \mid \mu, \sigma^2\right) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)$$

A random walk can then be described by the PDF of the Rayleigh distribution, credited to Lord Rayleigh and built on the normal distribution. This distribution accounts for the probability of moving around in space after $n$ steps, which is why $n$ appears in the PDF:

$$f\left(r_n\right) = \frac{r_n}{n \sigma^2} \exp\left(-\frac{r_n^2}{2n \sigma^2}\right)$$

What do random walks look like? Some of these PDFs (Rayleigh distribution) can be plotted for a limited number of steps, particles, and positions, as shown below.

```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

def rayleigh(x0, n, dt, delta):
    x0 = np.asfarray(x0)
    shp = (n + 1,) + x0.shape
    # Use the normal distribution as the "template" to generate the Rayleigh walk
    r = np.random.normal(size=shp, scale=delta * np.sqrt(dt))
    r[0] = 0.0
    x = r.cumsum(axis=0)
    x += x0
    return x

xinicial = np.zeros(1)
n = 500       # number of particles
dt = 10.0
delta = 0.25
xini = np.array(rayleigh(xinicial, n, dt, delta))
yini = np.array(rayleigh(xinicial, n, dt, 0.25))
zini = np.array(rayleigh(xinicial, n, dt, 0.2))

# *--- 3D plot ---*
# Create the color map
number = 1  # number of colors, one per particle
cmap = plt.get_cmap('gnuplot')
colors = [cmap(i) for i in np.linspace(0, 1, number)]

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for i, color in enumerate(colors, start=1):
    xini = np.array(rayleigh(xinicial, n, dt, i))
    ax.scatter(xini, yini, zini, color=color)
plt.show()
```
proyecto-final.ipynb
developEdwin/Proyecto-Browniano
gpl-3.0
Mathematical representation of Brownian motion

Many mathematicians tried to describe and solve Brownian motion and the behavior of gases inside containers: why do the particles move that way? Why do they show such similar patterns? It was not until Einstein tackled the problem, later corroborated by Perrin's work, that a solution emerged. Einstein also approached it through probability, but from the point of view of the diffusion of a substance's concentration inside a container. He imagined that the substance would change position according to the proportion of molar concentration with respect to a volume, and set out to determine the position of these (Brownian) particles through their mean squared displacement, carrying out all the calculations needed to arrive at the form

$$\rho\left(x,t\right) = \frac{N}{\sqrt{4 \pi D t}} \exp\left(-\frac{x^2}{4Dt}\right)$$

This PDF can be simplified using the Wiener process, written as

$$f_{W_t}\left(x\right) = \frac{1}{\sqrt{2 \pi t}} \exp\left(-\frac{x^2}{2t}\right),$$

which can be seen in the following code.

```python
%matplotlib inline
from scipy.stats import norm
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

def brownian(x0, n, dt, delta, out=None):
    x0 = np.asarray(x0)
    r = norm.rvs(size=x0.shape + (n,), scale=delta * np.sqrt(dt))
    if out is None:
        out = np.empty(r.shape)
    np.cumsum(r, axis=1, out=out)
    out += np.expand_dims(x0, axis=-1)
    return out

def main():
    delta = 0.25
    T = 10.0
    N = 1000
    dt = T / N
    x = np.empty((3, N + 1))
    x[:, 0] = np.zeros(3)
    for _ in range(500):  # range(), not Python 2's xrange()
        brownian(x[:, 0], N, dt, delta, out=x[:, 1:])

    # 2D plot
    plt.plot(x[0], x[1])
    plt.title('2D Brownian motion')
    plt.axis('equal')
    plt.grid(True)
    plt.show()

    # 3D plot
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.plot(x[0], x[1], x[2])
    plt.title('3D Brownian motion')

if __name__ == "__main__":
    main()
```
Convolutions in neural networks (1D)

<img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-xs.png"/>
<img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-F.png"/>
<img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-Conv2.png"/>
<img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-Conv3.png"/>
<img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-Conv2Conv2.png"/>
<img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-Conv2Max2Conv2.png"/>

Convolutional layer in Keras (1D)

```python
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.embeddings import Embedding
from keras.layers.convolutional import Convolution1D, MaxPooling1D

model = Sequential()
model.add(Embedding(5000, 100, input_length=100))
model.add(Dropout(0.25))
model.add(Convolution1D(nb_filter=250,
                        filter_length=3,
                        border_mode='valid',
                        activation='relu',
                        subsample_length=1))
model.add(MaxPooling1D(pool_length=2))
```

```python
from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility

from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.embeddings import Embedding
from keras.layers.convolutional import Convolution1D, MaxPooling1D
from keras.datasets import imdb

# set parameters:
max_features = 5000
maxlen = 100
batch_size = 32
embedding_dims = 100
nb_filter = 250
filter_length = 3
hidden_dims = 250
nb_epoch = 2

print('Loading data...')
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=max_features, test_split=0.2)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')

print('Pad sequences (samples x time)')
X_train = sequence.pad_sequences(X_train, maxlen=maxlen)
X_test = sequence.pad_sequences(X_test, maxlen=maxlen)
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)

print('Build model...')
model = Sequential()

# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features, embedding_dims, input_length=maxlen))
model.add(Dropout(0.25))

# we add a Convolution1D, which will learn nb_filter
# word group filters of size filter_length:
model.add(Convolution1D(nb_filter=nb_filter,
                        filter_length=filter_length,
                        border_mode='valid',
                        activation='relu',
                        subsample_length=1))

# we use standard max pooling (halving the output of the previous layer):
model.add(MaxPooling1D(pool_length=2))

# We flatten the output of the conv layer,
# so that we can add a vanilla dense layer:
model.add(Flatten())

# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.25))
model.add(Activation('relu'))

# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              class_mode='binary')
model.fit(X_train, y_train,
          batch_size=batch_size,
          nb_epoch=nb_epoch,
          show_accuracy=True,
          validation_data=(X_test, y_test))
```
Wyklady/12/Deep Learning.ipynb
emjotde/UMZ
cc0-1.0
Convolutions in image processing

<img style="margin: auto" width="60%" src="http://devblogs.nvidia.com/parallelforall/wp-content/uploads/sites/3/2015/11/convolution.png"/>
<img style="margin: auto" width="60%" src="http://devblogs.nvidia.com/parallelforall/wp-content/uploads/sites/3/2015/11/Convolution_schematic.gif"/>

2D convolutions, formally

$$
\left[\begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i \end{array}\right]
*
\left[\begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{array}\right]
= (1 \cdot i)+(2 \cdot h)+(3 \cdot g)+(4 \cdot f)+(5 \cdot e)+(6 \cdot d)+(7 \cdot c)+(8 \cdot b)+(9 \cdot a)
$$

More: https://en.wikipedia.org/wiki/Kernel_(image_processing)

Convolutions in neural networks (2D)

<img style="margin: auto" width="30%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv2-unit.png"/>
<img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv2-9x5-Conv2.png"/>
<img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv2-9x5-Conv2Conv2.png"/>
<img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv2-9x5-Conv2Max2Conv2.png"/>

Structure of a convolutional unit

<img style="margin: auto" width="60%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-A.png"/>
<img style="margin: auto" width="60%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-A-NIN.png"/>

More at: http://colah.github.io/posts/2014-07-Conv-Nets-Modular

ImageNet: Alex Krizhevsky, Ilya Sutskever, Geoffrey Hinton (2012)

<img style="margin: auto" width="60%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/KSH-arch.png"/>

Krizhevsky, A., Sutskever, I. and Hinton, G. E. ImageNet Classification with Deep Convolutional Neural Networks. NIPS 2012: Neural Information Processing Systems, Lake Tahoe, Nevada.

| Model         | Top-1 | Top-5 |
|---------------|-------|-------|
| Sparse Coding | 47.1% | 28.2% |
| SIFT+FVs      | 45.7% | 25.7% |
| CNN           | 37.5% | 17.0% |

<img style="margin: auto" width="80%" src="https://flickrcode.files.wordpress.com/2014/10/conv-net2.png"/>
<img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/KSH-results.png"/>
<img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/KSH-filters.png"/>

Convolutional layer and max-pooling in Keras (2D)
```python
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D

model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(1, 28, 28)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
```
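As a numerical check of the 2D convolution formula given earlier (flip the kernel, slide it over the image, multiply element-wise, and sum), here is a small numpy sketch, not part of the original slides:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'valid' 2D convolution: flip the kernel, slide, multiply element-wise, sum."""
    kh, kw = kernel.shape
    flipped = np.flipud(np.fliplr(kernel))   # the 180-degree flip in the formula
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * flipped)
    return out

image = np.arange(1, 17, dtype=float).reshape(4, 4)
kernel = np.array([[1., 2.], [3., 4.]])
out = conv2d_valid(image, kernel)
print(out)   # out[0, 0] = 1*4 + 2*3 + 5*2 + 6*1 = 26
```

A convolutional layer learns the kernel entries instead of fixing them by hand, but the sliding window arithmetic is the same.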
Wyklady/12/Deep Learning.ipynb
emjotde/UMZ
cc0-1.0
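The hand-expanded sum in the convolution formula above can be verified with NumPy — a true convolution flips the kernel in both axes before the elementwise multiply-and-sum. A small sketch with concrete values standing in for $a \ldots i$:

```python
import numpy as np

# Image patch [[a, b, c], [d, e, f], [g, h, i]] with concrete values 1..9,
# and the 3x3 kernel from the formula above.
patch = np.arange(1.0, 10.0).reshape(3, 3)    # a=1, b=2, ..., i=9
kernel = np.arange(1.0, 10.0).reshape(3, 3)

# Convolution at the center: flip the kernel in both axes, then sum elementwise.
conv_value = np.sum(np.flip(kernel) * patch)

# The expanded sum from the text: (1*i) + (2*h) + ... + (9*a)
expanded = sum(k * p for k, p in zip(kernel.ravel(), patch.ravel()[::-1]))

print(conv_value, expanded)   # both equal 165.0
```

Without the flip this would be cross-correlation, which is what most deep-learning "convolution" layers actually compute.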
MNIST - Convolutional networks in Keras
from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import np_utils

batch_size = 128
nb_classes = 10
nb_epoch = 12

# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
nb_pool = 2
# convolution kernel size
nb_conv = 3

# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()

X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()

model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
                        border_mode='valid',
                        input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='Adadelta')

model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          show_accuracy=True, verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, show_accuracy=True, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
Wyklady/12/Deep Learning.ipynb
emjotde/UMZ
cc0-1.0
RNN and LSTM - Processing sequences <img style="margin: auto" width="20%" src="http://colah.github.io/posts/2015-08-Understanding-LSTMs/img/RNN-rolled.png"/> <img style="margin: auto" width="80%" src="http://colah.github.io/posts/2015-08-Understanding-LSTMs/img/RNN-unrolled.png"/> Long-distance dependencies <img style="margin: auto" width="60%" src="http://colah.github.io/posts/2015-08-Understanding-LSTMs/img/RNN-shorttermdepdencies.png"/> <img style="margin: auto" width="60%" src="http://colah.github.io/posts/2015-08-Understanding-LSTMs/img/RNN-longtermdependencies.png"/> LSTM: http://colah.github.io/posts/2015-08-Understanding-LSTMs/ Text generation (character by character) <img style="margin: auto" width="80%" src="http://karpathy.github.io/assets/rnn/charseq.jpeg"/> A ''Shakespeare'' generator PANDARUS:<br/> Alas, I think he shall be come approached and the day<br/> When little srain would be attain'd into being never fed,<br/> And who is but a chain and subjects of his death, <br/> I should not sleep. Second Senator:<br/> They are away this miseries, produced upon my soul, <br/> Breaking and strongly should be buried, when I perish<br/> The earth and thoughts of many states.<br/> DUKE VINCENTIO:<br/> Well, your wit is in the care of side and that.<br/> Second Lord:<br/> They would be ruled after this chamber, and <br/> my fair nues begun out of the fact, to be conveyed,<br/> Whose noble souls I'll have the heart of the wars.<br/> Neural machine translation <img style="margin: auto" width="80%" src="http://devblogs.nvidia.com/parallelforall/wp-content/uploads/sites/3/2015/06/Figure2_NMT_system.png"/> NMT Caption generation <img style="margin: auto" width="100%" src="nic_rated.jpg"/> Network diagram <img style="margin: auto" width="80%" src="nn_c.png"/>
from IPython.display import YouTubeVideo YouTubeVideo("8BFzu9m52sc")
Wyklady/12/Deep Learning.ipynb
emjotde/UMZ
cc0-1.0
Import Libraries
# Importing necessary tensorflow library and printing the TF version. import tensorflow as tf print("TensorFlow version: ",tf.version.VERSION) import os # Here we'll import Pandas and Numpy data processing libraries import pandas as pd import numpy as np from datetime import datetime # Use matplotlib for visualizing the model import matplotlib.pyplot as plt # Use seaborn for data visualization import seaborn as sns %matplotlib inline
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Load the Dataset The dataset is based on California's Vehicle Fuel Type Count by Zip Code report. The dataset has been modified to make the data "untidy" and is thus a synthetic representation that can be used for learning purposes.
# Creating directory to store dataset
if not os.path.isdir("../data/transport"):
    os.makedirs("../data/transport")

# Download the raw .csv data by copying the data from a cloud storage bucket.
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/untidy_vehicle_data_toy.csv ../data/transport

# ls shows the working directory's contents.
# The -l parameter lists files along with their assigned permissions.
!ls -l ../data/transport
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Read Dataset into a Pandas DataFrame Next, let's read in the dataset just copied from the cloud storage bucket and create a Pandas DataFrame. We also add a Pandas .head() function to show you the top 5 rows of data in the DataFrame. Head() and Tail() are "best-practice" functions used to investigate datasets.
# Reading "untidy_vehicle_data_toy.csv" file using the read_csv() function included in the pandas library. df_transport = pd.read_csv('../data/transport/untidy_vehicle_data_toy.csv') # Output the first five rows. df_transport.head()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
DataFrame Column Data Types DataFrames may have heterogeneous or "mixed" data types, that is, some columns are numbers, some are strings, and some are dates, etc. Because CSV files do not contain information on what data types are contained in each column, Pandas infers the data types when loading the data, e.g. if a column contains only numbers, Pandas will set that column's data type to numeric: integer or float. Run the next cell to see information on the DataFrame.
# The .info() function displays a concise summary of the dataframe.
df_transport.info()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
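The dtype inference described above can be seen on a tiny in-memory CSV (the column names here are just illustrative):

```python
import io
import pandas as pd

# A tiny in-memory CSV: pandas must guess each column's dtype from the raw text.
csv_text = "vehicles,make,date\n10,Ford,1/1/2020\n20,Honda,2/1/2020\n"
toy = pd.read_csv(io.StringIO(csv_text))

print(toy.dtypes)
# vehicles is inferred as an integer dtype; make and date load as text
# (shown as generic 'object' in classic pandas) until converted explicitly.
```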
From what the .info() function shows us, we have six string objects and one float object. We can definitely see more of the "string" object values now!
# Let's print out the first and last five rows of the dataframe.
print(df_transport.head(5))
print(df_transport.tail(5))
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Summary Statistics At this point, we have only one column which contains a numerical value (e.g. Vehicles). For features which contain numerical values, we are often interested in various statistical measures relating to those values. Note that because we only have one numeric feature, we see only one summary statistic - for now.
# We can use .describe() to see some summary statistics for the numeric fields in our dataframe. df_transport.describe()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's investigate a bit more of our data by using the .groupby() function.
# The .groupby() function is used for splitting the data into groups based on some criteria.
grouped_data = df_transport.groupby(['Zip Code','Model Year','Fuel','Make','Light_Duty','Vehicles'])

# Get the first entry for each fuel type.
df_transport.groupby('Fuel').first()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
What can we deduce about the data at this point? Let's summarize our data by row, column, features, unique, and missing values.
# In pandas, .shape gives the number of rows and columns of a dataframe.
# The number of rows is given by .shape[0]. The number of columns is given by .shape[1].
# Thus, .shape is a tuple with two entries -- rows and columns.
print ("Rows     : " ,df_transport.shape[0])
print ("Columns  : " ,df_transport.shape[1])
print ("\nFeatures : \n" ,df_transport.columns.tolist())
print ("\nUnique values :  \n",df_transport.nunique())
print ("\nMissing values :  ", df_transport.isnull().sum().values.sum())
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's see the data again -- this time the last five rows in the dataset.
# Output the last five rows in the dataset. df_transport.tail()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
What Are Our Data Quality Issues? Data Quality Issue #1: Missing Values: Each feature column has multiple missing values. In fact, we have a total of 18 missing values. Data Quality Issue #2: Date DataType: Date is shown as an "object" datatype and should be a datetime. In addition, Date is in one column. Our business requirement is to see the Date parsed out to year, month, and day. Data Quality Issue #3: Model Year: We are only interested in years greater than 2006, not "<2006". Data Quality Issue #4: Categorical Columns: The feature column "Light_Duty" is categorical and has a "Yes/No" choice. We cannot feed values like this into a machine learning model. In addition, we need to "one-hot encode" the remaining "string"/"object" columns. Data Quality Issue #5: Temporal Features: How do we handle year, month, and day? Data Quality Issue #1: Resolving Missing Values Most algorithms do not accept missing values. Yet, when we see missing values in our dataset, there is always a tendency to just "drop all the rows" with missing values. Although Pandas will fill in the blank space with "NaN", we should "handle" them in some way. While all the methods to handle missing values are beyond the scope of this lab, there are a few methods you should consider. For numeric columns, use the "mean" values to fill in the missing numeric values. For categorical columns, use the "mode" (or most frequent value) to fill in missing categorical values. In this lab, we use the .apply and Lambda functions to fill every column with its own most frequent value. You'll learn more about Lambda functions later in the lab. Let's check again for missing values by showing how many rows contain NaN values for each feature column.
# The isnull() method is used to check and manage NULL values in a data frame. # TODO 1a df_transport.isnull().sum()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Run the cell to apply the lambda function.
# Here we are using the apply function with lambda. # We can use the apply() function to apply the lambda function to both rows and columns of a dataframe. # TODO 1b df_transport = df_transport.apply(lambda x:x.fillna(x.value_counts().index[0]))
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
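The mode-fill pattern just applied can be seen in isolation on a toy frame (the column names and values here are only illustrative):

```python
import pandas as pd

# Toy frame with one missing value per column.
toy = pd.DataFrame({'fuel': ['Gas', 'Gas', None, 'Diesel'],
                    'make': ['Ford', None, 'Ford', 'Ford']})

# Same pattern as above: fill each column with its most frequent value.
# value_counts() sorts by frequency, so .index[0] is the column's mode.
filled = toy.apply(lambda x: x.fillna(x.value_counts().index[0]))

print(filled.isnull().sum().sum())   # 0 -- no missing values remain
```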
Let's check again for missing values.
# The isnull() method is used to check and manage NULL values in a data frame. # TODO 1c df_transport.isnull().sum()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Data Quality Issue #2: Convert the Date Feature Column to a Datetime Format
# The Date column is indeed shown as a string object. We can convert it to the datetime datatype with the to_datetime() function in Pandas.
# TODO 2a
df_transport['Date'] = pd.to_datetime(df_transport['Date'], format='%m/%d/%Y')

# Date is now converted; display a concise summary of the dataframe.
# TODO 2b
df_transport.info()

# Now we will parse Date into three columns: year, month, and day.
df_transport['year'] = df_transport['Date'].dt.year
df_transport['month'] = df_transport['Date'].dt.month
df_transport['day'] = df_transport['Date'].dt.day
#df['hour'] = df['date'].dt.hour - you could use this if your date format included hour.
#df['minute'] = df['date'].dt.minute - you could use this if your date format included minute.

# The .info() function displays a concise summary of the dataframe.
df_transport.info()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's confirm the Date parsing. This will also give us another visualization of the data.
# Here, we create a new dataframe called "grouped_data", grouping by the column "Make".
grouped_data = df_transport.groupby(['Make'])

# Get the first entry for each month.
df_transport.groupby('month').first()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Now that we have dates as integers, let's do some additional plotting.
# Here we will visualize our data using the figure() function in the pyplot module of matplotlib's library -- which is used to create a new figure. plt.figure(figsize=(10,6)) # Seaborn's .jointplot() displays a relationship between 2 variables (bivariate) as well as 1D profiles (univariate) in the margins. This plot is a convenience class that wraps JointGrid. sns.jointplot(x='month',y='Vehicles',data=df_transport) # The title() method in matplotlib module is used to specify title of the visualization depicted and displays the title using various attributes. plt.title('Vehicles by Month')
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Data Quality Issue #3: Rename a Feature Column and Remove a Value. Our feature columns have different "capitalizations" in their names, e.g. both upper and lower "case". There are also "spaces" in some of the column names. Finally, we are only interested in years greater than 2006, not "<2006". We can resolve the "case" problem by making all the feature column names lower case.
# Let's remove all the spaces for feature columns by renaming them. # TODO 3a df_transport.rename(columns = { 'Date': 'date', 'Zip Code':'zipcode', 'Model Year': 'modelyear', 'Fuel': 'fuel', 'Make': 'make', 'Light_Duty': 'lightduty', 'Vehicles': 'vehicles'}, inplace = True) # Output the first two rows. df_transport.head(2)
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Note: Next we create a copy of the dataframe to avoid the "SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame" warning. Run the cell to remove the value '<2006' from the modelyear feature column.
# Here, we create a copy of the dataframe to avoid copy warning issues. # TODO 3b df = df_transport.loc[df_transport.modelyear != '<2006'].copy() # Here we will confirm that the modelyear value '<2006' has been removed by doing a value count. df['modelyear'].value_counts(0)
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Data Quality Issue #4: Handling Categorical Columns The feature column "lightduty" is categorical and has a "Yes/No" choice. We cannot feed values like this into a machine learning model. We need to convert the binary answers from strings of yes/no to integers of 1/0. There are various methods to achieve this. We will use the "apply" method with a lambda expression. Pandas .apply() takes a function and applies it to all values of a Pandas series. What is a Lambda Function? Typically, Python requires that you define a function using the def keyword. However, lambda functions are anonymous -- which means there is no need to name them. The most common use case for lambda functions is in code that requires a simple one-line function (e.g. lambdas only have a single expression). As you progress through the Course Specialization, you will see many examples where lambda functions are being used. Now is a good time to become familiar with them.
# Let's count the number of "Yes" and "No" values in the 'lightduty' feature column.
df['lightduty'].value_counts(0)

# Let's convert the Yes to 1 and No to 0.
# The .apply takes a function and applies it to all values of a Pandas series (e.g. lightduty).
df.loc[:,'lightduty'] = df['lightduty'].apply(lambda x: 0 if x=='No' else 1)
df['lightduty'].value_counts(0)

# Confirm that "lightduty" has been converted.
df.head()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
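As a minimal standalone illustration of the apply/lambda pattern just used (toy values only):

```python
import pandas as pd

s = pd.Series(['Yes', 'No', 'Yes'])

# Anonymous one-line function: no def, no name -- just a single expression.
encoded = s.apply(lambda x: 0 if x == 'No' else 1)

print(encoded.tolist())   # [1, 0, 1]
```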
One-Hot Encoding Categorical Feature Columns Machine learning algorithms expect input vectors and not categorical features. Specifically, they cannot handle text or string values. Thus, it is often useful to transform categorical features into vectors. One transformation method is to create dummy variables for our categorical features. Dummy variables are a set of binary (0 or 1) variables that each represent a single class from a categorical feature. We simply encode the categorical variable as a one-hot vector, i.e. a vector where only one element is non-zero, or hot. With one-hot encoding, a categorical feature becomes an array whose size is the number of possible choices for that feature. Pandas provides a function called "get_dummies" to convert a categorical variable into dummy/indicator variables.
# Making dummy variables for categorical data with more inputs. data_dummy = pd.get_dummies(df[['zipcode','modelyear', 'fuel', 'make']], drop_first=True) # Output the first five rows. data_dummy.head() # Merging (concatenate) original data frame with 'dummy' dataframe. # TODO 4a df = pd.concat([df,data_dummy], axis=1) df.head() # Dropping attributes for which we made dummy variables. Let's also drop the Date column. # TODO 4b df = df.drop(['date','zipcode','modelyear', 'fuel', 'make'], axis=1) # Confirm that 'zipcode','modelyear', 'fuel', and 'make' have been dropped. df.head()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
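On a toy column, the effect of `drop_first=True` used above is easy to see: a k-category feature becomes k−1 indicator columns, with the dropped first category acting as the baseline (the values here are hypothetical):

```python
import pandas as pd

fuel = pd.DataFrame({'fuel': ['Gas', 'Diesel', 'Electric', 'Gas']})

# One indicator column per category...
full = pd.get_dummies(fuel['fuel'])
# ...versus k-1 columns: the alphabetically-first category ('Diesel') is dropped
# and is represented implicitly by all-zeros in the remaining columns.
reduced = pd.get_dummies(fuel['fuel'], drop_first=True)

print(full.shape[1], reduced.shape[1])   # 3 2
```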
Data Quality Issue #5: Temporal Feature Columns Our dataset now contains year, month, and day feature columns. Let's convert the month and day feature columns to meaningful representations as a way to get us thinking about changing temporal features -- as they are sometimes overlooked. Note that the Feature Engineering course in this Specialization will provide more depth on methods to handle year, month, day, and hour feature columns.
# Let's print the unique values for "month", "day" and "year" in our dataset. print ('Unique values of month:',df.month.unique()) print ('Unique values of day:',df.day.unique()) print ('Unique values of year:',df.year.unique())
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Don't worry, this is the last time we will use this code, as you can develop an input pipeline to address these temporal feature columns in TensorFlow and Keras - and it is much easier! But, sometimes you need to appreciate what you're not going to encounter as you move through the course! Run the cell to view the output.
# Here we map each temporal variable onto a circle such that the lowest value for that variable appears right next to the largest value. We compute the x- and y- component of that point using the sin and cos trigonometric functions.
df['day_sin'] = np.sin(df.day*(2.*np.pi/31))
df['day_cos'] = np.cos(df.day*(2.*np.pi/31))
df['month_sin'] = np.sin((df.month-1)*(2.*np.pi/12))
df['month_cos'] = np.cos((df.month-1)*(2.*np.pi/12))

# Let's drop the month, day, and year columns.
# TODO 5
df = df.drop(['month','day','year'], axis=1)

# Scroll right to see the converted month and day columns.
df.tail(4)
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
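A quick check of why the circular month encoding above helps: as plain integers, December (12) and January (1) are maximally far apart, yet they are adjacent months. On the circle they become neighbors (a sketch using the same formulas as above):

```python
import numpy as np

def month_xy(m):
    # Same mapping as above: month 1 sits at angle 0, full circle over 12 months.
    angle = (m - 1) * (2.0 * np.pi / 12)
    return np.array([np.sin(angle), np.cos(angle)])

dec_jan = np.linalg.norm(month_xy(12) - month_xy(1))
jun_jan = np.linalg.norm(month_xy(6) - month_xy(1))

print(dec_jan < jun_jan)   # True: December sits right next to January
```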
Wavelength Response Calculate the wavelength response functions with SunPy. EUV Channels: 94, 131, 171, 193, 211, 335, 304 $\mathrm{\mathring{A}}$
response = sunpy.instr.aia.Response(ssw_path='/Users/willbarnes/Documents/Rice/Research/ssw/') response.calculate_wavelength_response()
notebooks/sunpy_aia_response_tutorial.ipynb
wtbarnes/aia_response
mit
Peek at the wavelength response functions.
response.peek_wavelength_response()
notebooks/sunpy_aia_response_tutorial.ipynb
wtbarnes/aia_response
mit
Far UV and Visible Channels: 1600, 1700, and 4500 $\mathrm{\mathring{A}}$ We can also very easily calculate the wavelength response functions for the far UV and visible channels centered on 1600, 1700, and 4500 $\mathrm{\mathring{A}}$.
response_uv = sunpy.instr.aia.Response(ssw_path='/Users/willbarnes/Documents/Rice/Research/ssw/', channel_list=[1600,1700,4500]) response_uv.calculate_wavelength_response() response_uv.peek_wavelength_response()
notebooks/sunpy_aia_response_tutorial.ipynb
wtbarnes/aia_response
mit
Detailed Instrument Information If you'd like very detailed information about each channel on the instrument, you can also generate an astropy Table that lists all of the different instrument properties that go into the wavelength response calculation. <div class="alert alert-danger" role="alert"> <h1>Warning!</h1> <p>The following function is currently being used as a stop gap for pulling the instrument information about each channel out of the SSW .genx files. It is very likely that this functionality will change in the future.</p> </div>
table=sunpy.instr.aia.aia_instr_properties_to_table([94,131,171,193,211,335,304,1600,1700,4500], ['/Users/willbarnes/Documents/Rice/Research/ssw/sdo/aia/response/aia_V6_all_fullinst.genx', '/Users/willbarnes/Documents/Rice/Research/ssw/sdo/aia/response/aia_V6_fuv_fullinst.genx']) table
notebooks/sunpy_aia_response_tutorial.ipynb
wtbarnes/aia_response
mit
Exercises Exercise: In the book, I present a different way to parameterize the quadratic model: $ \Delta p = r p (1 - p / K) $ where $r=\alpha$ and $K=-\alpha/\beta$. Write a version of update_func that implements this version of the model. Test it by computing the values of r and K that correspond to alpha=0.025, beta=-0.0018, and confirm that you get the same results.
# Solution system = System(t_0=t_0, t_end=t_end, p_0=p_0, alpha=0.025, beta=-0.0018) system.r = system.alpha system.K = -system.alpha/system.beta system.r, system.K # Solution def update_func_quad2(pop, t, system): """Compute the population next year. pop: current population t: current year system: system object containing parameters of the model returns: population next year """ net_growth = system.r * pop * (1 - pop / system.K) return pop + net_growth # Solution results = run_simulation(system, update_func_quad2) plot_results(census, un, results, 'Quadratic model')
soln/chap07soln.ipynb
AllenDowney/ModSimPy
mit
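The two parameterizations in the exercise can also be checked against each other numerically: with $r=\alpha$ and $K=-\alpha/\beta$, the net growth $\alpha p + \beta p^2$ equals $r p (1 - p/K)$ for any population $p$.

```python
# Parameters from the exercise.
alpha, beta = 0.025, -0.0018

# Reparameterization: r = alpha, K = -alpha/beta.
r = alpha
K = -alpha / beta

# The two forms of the net growth agree for any population p.
for p in [0.5, 1.0, 5.0, 10.0]:
    g1 = alpha * p + beta * p**2      # quadratic form
    g2 = r * p * (1 - p / K)          # logistic form
    assert abs(g1 - g2) < 1e-12

print(K)   # carrying capacity, roughly 13.9
```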
In this section, we consider the famous Gauss-Markov problem which will give us an opportunity to use all the material we have so far developed. The Gauss-Markov model is the fundamental model for noisy parameter estimation because it estimates unobservable parameters given a noisy indirect measurement. Incarnations of the same model appear in all studies of Gaussian models. This case is an excellent opportunity to use everything we have so far learned about projection and conditional expectation. Following Luenberger [luenberger1968optimization], let's consider the following problem: $$ \mathbf{y} = \mathbf{W} \boldsymbol{\beta} + \boldsymbol{\epsilon} $$ where $\mathbf{W}$ is a $ n \times m $ matrix, and $\mathbf{y}$ is a $n \times 1$ vector. Also, $\boldsymbol{\epsilon}$ is a $n$-dimensional normally distributed random vector with zero-mean and covariance, $$ \mathbb{E}( \boldsymbol{\epsilon} \boldsymbol{\epsilon}^T) = \mathbf{Q} $$ Note that engineering systems usually provide a calibration mode where you can estimate $\mathbf{Q}$, so it's not fantastical to assume you have some knowledge of the noise statistics. The problem is to find a matrix $\mathbf{K}$ so that $ \boldsymbol{\hat{\beta}}=\mathbf{K} \mathbf{y}$ approximates $ \boldsymbol{\beta}$. Note that we only have knowledge of $\boldsymbol{\beta}$ via $ \mathbf{y}$, so we can't measure it directly. Further, note that $\mathbf{K} $ is a matrix, not a vector, so there are $m \times n$ entries to compute.
We can approach this problem the usual way by trying to solve the MMSE problem: $$ \min_K\mathbb{E}(\Vert\boldsymbol{\hat{\beta}}-\boldsymbol{\beta} \Vert^2) $$ which we can write out as $$ \min_K \mathbb{E}(\Vert \boldsymbol{\hat{\beta}}-\boldsymbol{\beta} \Vert^2) = \min_K\mathbb{E}(\Vert \mathbf{K}\mathbf{W}\boldsymbol{\beta}+\mathbf{K}\boldsymbol{\epsilon}- \boldsymbol{\beta} \Vert^2) $$ and since $\boldsymbol{\epsilon}$ is the only random variable here, this simplifies to $$ \min_K \Vert \mathbf{K}\mathbf{W}\boldsymbol{\beta}-\boldsymbol{\beta} \Vert^2 + \mathbb{E}(\Vert\mathbf{K}\boldsymbol{\epsilon} \Vert^2) $$ The next step is to compute $$ \mathbb{E}(\Vert\mathbf{K}\boldsymbol{\epsilon} \Vert^2) = \mathbb{E}(\boldsymbol{\epsilon}^T \mathbf{K}^T \mathbf{K} \boldsymbol{\epsilon})=\Tr(\mathbf{K} \mathbb{E}(\boldsymbol{\epsilon}\boldsymbol{\epsilon}^T) \mathbf{K}^T)=\Tr(\mathbf{K Q K}^T) $$ using the properties of the trace of a matrix. We can assemble everything as $$ \min_K \Vert \mathbf{K W} \boldsymbol{\beta} - \boldsymbol{\beta}\Vert^2 + \Tr(\mathbf{K Q K}^T) $$ Now, if we were to solve this for $\mathbf{K}$, it would be a function of $\boldsymbol{\beta}$, which is the same thing as saying that the estimator, $\boldsymbol{\hat{\beta}}$, is a function of what we are trying to estimate, $\boldsymbol{\beta}$, which makes no sense. However, writing this out tells us that if we had $\mathbf{K W}= \mathbf{I}$, then the first term vanishes and the problem simplifies to $$ \min_K \Tr(\mathbf{K Q K}^T) $$ with the constraint, $$ \mathbf{KW} = \mathbf{I} $$ This requirement is the same as asserting that the estimator is unbiased, $$ \mathbb{E}(\boldsymbol{\hat{\beta}})=\mathbf{KW} \boldsymbol{\beta} = \boldsymbol{\beta} $$ To line this problem up with our earlier work, let's consider the $i^{th}$ row of $\mathbf{K}$, written as the column vector $\mathbf{k}_i$.
Now, we can re-write the problem as $$ \min_{\mathbf{k}_i} (\mathbf{k}_i^T \mathbf{Q} \mathbf{k}_i) $$ with $$ \mathbf{k}_i^T \mathbf{W} = \mathbf{e}_i $$ and from our previous work on constrained optimization, we know the solution to this: $$ \mathbf{k}_i = \mathbf{Q}^{-1} \mathbf{W}(\mathbf{W}^T \mathbf{Q}^{-1} \mathbf{W})^{-1}\mathbf{e}_i $$ Now all we have to do is stack these together for the general solution: $$ \mathbf{K} = (\mathbf{W}^T \mathbf{Q}^{-1} \mathbf{W})^{-1} \mathbf{W}^T\mathbf{Q}^{-1} $$ It's easy when you have all of the concepts lined up! For completeness, the covariance of the error is $$ \mathbb{E}(\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}) (\hat{\boldsymbol{\beta}}-\boldsymbol{\beta})^T = \mathbb{E}(\mathbf{K} \boldsymbol{\epsilon} \boldsymbol{\epsilon}^T \mathbf{K}^T)=\mathbf{K}\mathbf{Q}\mathbf{K}^T =(\mathbf{W}^T \mathbf{Q}^{-1} \mathbf{W})^{-1} $$

<!-- dom:FIGURE: [fig-statistics/Gauss_Markov_001.png, width=500 frac=0.85] The red circles show the points to be estimated in the *xy*-plane by the black points. <div id="fig:Gauss_Markov_001"></div> -->
<!-- begin figure -->
<div id="fig:Gauss_Markov_001"></div>
<p>The red circles show the points to be estimated in the <em>xy</em>-plane by the black points.</p>
<img src="fig-statistics/Gauss_Markov_001.png" width=500>
<!-- end figure -->

Figure shows the simulated $\mathbf{y}$ data as red circles. The black dots show the corresponding estimates, $\boldsymbol{\hat{\beta}}$, for each sample. The black lines show the true value of $\boldsymbol{\beta}$ versus the average of the estimated $\boldsymbol{\beta}$-values, $\widehat{\boldsymbol{\beta}_m}$. The matrix $\mathbf{K}$ maps the red circles to the corresponding dots. Note there are many possible ways to map the red circles to the plane, but the $\mathbf{K}$ is the one that minimizes the MSE for $\boldsymbol{\beta}$. Programming Tip. Although the full source code is available in the corresponding IPython Notebook, the following snippets provide a quick walkthrough. To simulate the target data, we define the relevant matrices below,
Q = np.eye(3)*0.1 # error covariance matrix # this is what we are trying estimate beta = matrix(ones((2,1))) W = matrix([[1,2], [2,3], [1,1]])
chapters/statistics/notebooks/Gauss_Markov.ipynb
unpingco/python_for_prob_stats_ml
mit
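The trace identity used in the derivation, $\mathbb{E}(\Vert\mathbf{K}\boldsymbol{\epsilon}\Vert^2)=\Tr(\mathbf{K Q K}^T)$, can be checked by simulation for an arbitrary fixed $\mathbf{K}$ (a sketch, independent of the notebook's state):

```python
import numpy as np

rng = np.random.default_rng(42)

# Any fixed K and covariance Q will do for checking the identity.
Q = np.eye(3) * 0.1
K = rng.standard_normal((2, 3))

# Monte-Carlo estimate of E||K eps||^2 versus the closed form Tr(K Q K^T).
eps = rng.multivariate_normal(np.zeros(3), Q, size=200_000)
mc = np.mean(np.sum((eps @ K.T) ** 2, axis=1))
exact = np.trace(K @ Q @ K.T)

print(mc, exact)   # the two agree to within Monte-Carlo error
```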
Then, we generate the noise terms and create the simulated data, $y$,
ntrials = 50 epsilon = np.random.multivariate_normal((0,0,0),Q,ntrials).T y=W*beta+epsilon
chapters/statistics/notebooks/Gauss_Markov.ipynb
unpingco/python_for_prob_stats_ml
mit
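With $\mathbf{W}$, $\mathbf{Q}$, and the simulated $\mathbf{y}$ in hand, the estimator derived above can be checked directly: $\mathbf{KW}$ should be the identity (unbiasedness), and averaging the estimates over many draws should recover $\boldsymbol{\beta}$. A self-contained sketch mirroring the matrices above with plain ndarrays instead of `np.matrix`:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mirror the problem setup above with plain ndarrays.
Q = np.eye(3) * 0.1
W = np.array([[1., 2.],
              [2., 3.],
              [1., 1.]])
beta = np.ones((2, 1))

# Gauss-Markov estimator K = (W^T Q^-1 W)^-1 W^T Q^-1, as derived above.
Qi = np.linalg.inv(Q)
K = np.linalg.inv(W.T @ Qi @ W) @ W.T @ Qi

# The unbiasedness constraint K W = I holds by construction.
print(np.allclose(K @ W, np.eye(2)))   # True

# Averaging the estimates over many noisy draws recovers beta.
eps = rng.multivariate_normal(np.zeros(3), Q, size=5000).T
b_hat = K @ (W @ beta + eps)
print(b_hat.mean(axis=1))              # close to [1, 1]
```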
<!-- dom:FIGURE: [fig-statistics/Gauss_Markov_002.png, width=500 frac=0.85] Focusing on the *xy*-plane in [Figure](#fig:Gauss_Markov_001), the dashed line shows the true value for $\boldsymbol{\beta}$ versus the mean of the estimated values $\widehat{\boldsymbol{\beta}}_m$. <div id="fig:Gauss_Markov_002"></div> --> <!-- begin figure --> <div id="fig:Gauss_Markov_002"></div> <p>Focusing on the <em>xy</em>-plane in [Figure](#fig:Gauss_Markov_001), the dashed line shows the true value for $\boldsymbol{\beta}$ versus the mean of the estimated values $\widehat{\boldsymbol{\beta}}_m$.</p> <img src="fig-statistics/Gauss_Markov_002.png" width=500> <!-- end figure --> This figure shows more detail in the horizontal *xy*-plane of [Figure](#fig:Gauss_Markov_001): the dots are individual estimates of $\boldsymbol{\hat{\beta}}$ from the corresponding simulated $\mathbf{y}$ data. The dashed line is the true value for $\boldsymbol{\beta}$ and the solid line ($\widehat{\boldsymbol{\beta}}_m$) is the average of all the dots. The gray ellipse provides an error ellipse for the covariance of the estimated $\boldsymbol{\beta}$ values. Programming Tip. Although the full source code is available in the corresponding IPython Notebook, the following snippets provide a quick walkthrough of the construction of this figure. To draw the ellipse, we need to import the patch primitive,
%matplotlib inline
from matplotlib.patches import Ellipse
chapters/statistics/notebooks/Gauss_Markov.ipynb
unpingco/python_for_prob_stats_ml
mit
To compute the parameters of the error ellipse, we use the covariance matrix of the individual estimates of $\boldsymbol{\beta}$, stored in the bm_cov variable below,
U,S,V = linalg.svd(bm_cov)
err = np.sqrt((matrix(bm))*(bm_cov)*(matrix(bm).T))
theta = np.arccos(U[0,1])/np.pi*180
chapters/statistics/notebooks/Gauss_Markov.ipynb
unpingco/python_for_prob_stats_ml
mit
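A brief aside on why the SVD yields the ellipse geometry: for a symmetric positive-definite covariance matrix the SVD coincides with the eigendecomposition, so the columns of `U` give the principal directions of the ellipse and the singular values `S` its squared semi-axis lengths. A sketch with a made-up 2×2 covariance (the matrix `C` below is illustrative, not taken from the simulation):

```python
import numpy as np

# Illustration with a made-up 2x2 covariance matrix C: for symmetric
# positive-definite C, the SVD U S V^T equals the eigendecomposition,
# so U holds the ellipse axis directions and S the squared axis lengths.
C = np.array([[2.0, 0.6],
              [0.6, 1.0]])
U, S, Vt = np.linalg.svd(C)
print(np.allclose(U @ np.diag(S) @ U.T, C))       # True for symmetric PD C
theta = np.degrees(np.arctan2(U[1, 0], U[0, 0]))  # tilt of the major axis
```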
Then, we add the scaled ellipse in the following,
# ax.add_patch(Ellipse(bm,err*2/np.sqrt(S[0]),
#                      err*2/np.sqrt(S[1]),
#                      angle=theta,color='gray'))

from __future__ import division
from mpl_toolkits.mplot3d import proj3d
from numpy.linalg import inv
import matplotlib.pyplot as plt
import numpy as np
from numpy import matrix, linalg, ones, array

Q = np.eye(3)*.1 # error covariance matrix
beta = matrix(ones((2,1))) # this is what we are trying to estimate
W = matrix([[1,2],
            [2,3],
            [1,1]])

ntrials = 50
epsilon = np.random.multivariate_normal((0,0,0),Q,ntrials).T

y=W*beta+epsilon
K=inv(W.T*inv(Q)*W)*matrix(W.T)*inv(Q)
b=K*y # estimated beta from data

fig = plt.figure()
fig.set_size_inches([6,6])
# some convenience definitions for plotting
bb = array(b)
bm = bb.mean(1)
yy = array(y)
ax = fig.add_subplot(111, projection='3d')
ax.plot3D(yy[0,:],yy[1,:],yy[2,:],'ro',label='y',alpha=0.3)
ax.plot3D([beta[0,0],0],[beta[1,0],0],[0,0],'k--',label=r'$\beta$')
ax.plot3D([bm[0],0],[bm[1],0],[0,0],'k-',lw=1,label=r'$\widehat{\beta}_m$')
ax.plot3D(bb[0,:],bb[1,:],0*bb[1,:],'.k',alpha=0.5,lw=3,label=r'$\widehat{\beta}$')
ax.legend(loc=0,fontsize=18)
ax.set_xlabel('$x_1$',fontsize=20)
ax.set_ylabel('$x_2$',fontsize=20)
ax.set_zlabel('$x_3$',fontsize=20)
fig.tight_layout()

from matplotlib.patches import Ellipse
fig, ax = plt.subplots()
fig.set_size_inches((6,6))
ax.set_aspect(1)
ax.plot(bb[0,:],bb[1,:],'ko',alpha=.3)
ax.plot([beta[0,0],0],[beta[1,0],0],'k--',label=r'$\beta$',lw=3)
ax.plot([bm[0],0],[bm[1],0],'k-',lw=3,label=r'$\widehat{\beta}_m$')
ax.legend(loc=0,fontsize=18)
bm_cov = inv(W.T*inv(Q)*W) # covariance of the estimator: (W^T Q^-1 W)^-1
U,S,V = linalg.svd(bm_cov)
err = np.sqrt((matrix(bm))*(bm_cov)*(matrix(bm).T))
theta = np.arccos(U[0,1])/np.pi*180
ax.add_patch(Ellipse(bm,err*2/np.sqrt(S[0]),err*2/np.sqrt(S[1])
             ,angle=theta,color='gray',alpha=0.5))
ax.set_xlabel('$x_1$',fontsize=20)
ax.set_ylabel('$x_2$',fontsize=20)
fig.tight_layout()
chapters/statistics/notebooks/Gauss_Markov.ipynb
unpingco/python_for_prob_stats_ml
mit
Now reset the notebook's session kernel! Since we're no longer using Cloud Dataflow, we'll be using the python3 kernel from here on out so don't forget to change the kernel if it's still python2.
# Import helpful libraries and setup our project, bucket, and region import os import tensorflow as tf import tensorflow_hub as hub PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 # do not change these os.environ["PROJECT"] = PROJECT os.environ["BUCKET"] = BUCKET os.environ["REGION"] = REGION os.environ["TFVERSION"] = "1.13" %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION %%bash if ! gsutil ls | grep -q gs://${BUCKET}/hybrid_recommendation/preproc; then gsutil mb -l ${REGION} gs://${BUCKET} # copy canonical set of preprocessed files if you didn't do preprocessing notebook gsutil -m cp -R gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/hybrid_recommendation gs://${BUCKET} fi
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Now we'll create our model function
# Create custom model function for our custom estimator def model_fn(features, labels, mode, params): # TODO: Create neural network input layer using our feature columns defined above # TODO: Create hidden layers by looping through hidden unit list # TODO: Compute logits (1 per class) using the output of our last hidden layer # TODO: Find the predicted class indices based on the highest logit (which will result in the highest probability) predicted_classes = # Read in the content id vocabulary so we can tie the predicted class indices to their respective content ids with file_io.FileIO(tf.gfile.Glob(filename = "gs://{}/hybrid_recommendation/preproc/vocabs/content_id_vocab.txt*".format(BUCKET))[0], mode = "r") as ifp: content_id_names = tf.constant(value = [x.rstrip() for x in ifp]) # Gather predicted class names based predicted class indices predicted_class_names = tf.gather(params = content_id_names, indices = predicted_classes) # If the mode is prediction if mode == tf.estimator.ModeKeys.PREDICT: # Create predictions dict predictions_dict = { "class_ids": tf.expand_dims(input = predicted_classes, axis = -1), "class_names" : tf.expand_dims(input = predicted_class_names, axis = -1), "probabilities": tf.nn.softmax(logits = logits), "logits": logits } # Create export outputs export_outputs = {"predict_export_outputs": tf.estimator.export.PredictOutput(outputs = predictions_dict)} return tf.estimator.EstimatorSpec( # return early since we"re done with what we need for prediction mode mode = mode, predictions = predictions_dict, loss = None, train_op = None, eval_metric_ops = None, export_outputs = export_outputs) # Continue on with training and evaluation modes # Create lookup table using our content id vocabulary table = tf.contrib.lookup.index_table_from_file( vocabulary_file = tf.gfile.Glob(filename = "gs://{}/hybrid_recommendation/preproc/vocabs/content_id_vocab.txt*".format(BUCKET))[0]) # Look up labels from vocabulary table labels = table.lookup(keys = labels) # 
# TODO: Compute loss using the correct type of softmax cross entropy since this is classification and our labels (content id indices) and probabilities are mutually exclusive
    loss =

    # If the mode is evaluation
    if mode == tf.estimator.ModeKeys.EVAL:
        # Compute evaluation metrics of total accuracy and the accuracy of the top k classes
        accuracy = tf.metrics.accuracy(labels = labels, predictions = predicted_classes, name = "acc_op")
        top_k_accuracy = tf.metrics.mean(values = tf.nn.in_top_k(predictions = logits, targets = labels, k = params["top_k"]))
        map_at_k = tf.metrics.average_precision_at_k(labels = labels, predictions = predicted_classes, k = params["top_k"])

        # Put eval metrics into a dictionary
        eval_metric_ops = {
            "accuracy": accuracy,
            "top_k_accuracy": top_k_accuracy,
            "map_at_k": map_at_k}

        # Create scalar summaries to see in TensorBoard
        tf.summary.scalar(name = "accuracy", tensor = accuracy[1])
        tf.summary.scalar(name = "top_k_accuracy", tensor = top_k_accuracy[1])
        tf.summary.scalar(name = "map_at_k", tensor = map_at_k[1])

        return tf.estimator.EstimatorSpec( # return early since we're done with what we need for evaluation mode
            mode = mode,
            predictions = None,
            loss = loss,
            train_op = None,
            eval_metric_ops = eval_metric_ops,
            export_outputs = None)

    # Continue on with training mode

    # If the mode is training
    assert mode == tf.estimator.ModeKeys.TRAIN

    # Create a custom optimizer
    optimizer = tf.train.AdagradOptimizer(learning_rate = params["learning_rate"])

    # Create train op
    train_op = optimizer.minimize(loss = loss, global_step = tf.train.get_global_step())

    return tf.estimator.EstimatorSpec( # final return since we're done with what we need for training mode
        mode = mode,
        predictions = None,
        loss = loss,
        train_op = train_op,
        eval_metric_ops = None,
        export_outputs = None)
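For reference, the quantity the loss TODO above asks for — a softmax cross entropy over mutually exclusive integer class labels — can be sketched in plain NumPy (toy logits and labels, purely illustrative; the lab itself expects the corresponding TensorFlow op):

```python
import numpy as np

def sparse_softmax_xent(logits, labels):
    # numerically stable log-softmax, then pick out the true-class entry
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 0.3]])
labels = np.array([0, 2])  # integer class indices, not one-hot vectors
loss = sparse_softmax_xent(logits, labels)
```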
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
http://seaborn.pydata.org/generated/seaborn.barplot.html#seaborn.barplot
import seaborn as sns; sns.set(color_codes=True) tips = sns.load_dataset("tips") ax = sns.barplot(x="day", y="total_bill", data=tips) ax = sns.barplot(x="day", y="total_bill", hue="sex", data=tips)
code/visualization_with_seaborn.ipynb
computational-class/cjc2016
mit
https://github.com/yufeiminds/echarts-python

pip install echarts-python
from echarts import Echart, Legend, Bar, Axis, Line from IPython.display import HTML chart = Echart('GDP', 'This is a fake chart') chart.use(Bar('China', [2, 3, 4, 5])) chart.use(Legend(['GDP'])) chart.use(Axis('category', 'bottom', data=['Nov', 'Dec', 'Jan', 'Feb'])) chart = Echart('GDP', 'This is a fake chart') chart.use(Line('China', [2, 5, 4, 7])) chart.use(Legend(['GDP'])) chart.use(Axis('category', 'bottom', data=['Nov', 'Dec', 'Jan', 'Feb'])) chart.plot()
code/visualization_with_seaborn.ipynb
computational-class/cjc2016
mit
Load data

Data source on Kaggle. The data is split into train and test sets. One wave file is one sample. We first load all wave files as vectors of integers into a list of lists.
def read_dir(dirname, X, y, label): for fn in os.listdir(dirname): w = wavfile.read(os.path.join(dirname, fn)) assert w[0] == 16000 X.append(w[1]) y.append(label) X_train_cat = [] y_train_cat = [] X_test_cat = [] y_test_cat = [] X_train_dog = [] y_train_dog = [] X_test_dog = [] y_test_dog = [] read_dir("data/cat_dog/train/cat", X_train_cat, y_train_cat, 1) read_dir("data/cat_dog/train/dog", X_train_dog, y_train_dog, 0) read_dir("data/cat_dog/test/cat", X_test_cat, y_test_cat, 1) read_dir("data/cat_dog/test/dog", X_test_dog, y_test_dog, 0) len(X_train_cat), len(X_train_dog)
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
We discard some of the cat files to balance the classes, then merge the cat and dog lists.
maxlen = min(len(X_train_cat), len(X_train_dog)) X_train = X_train_cat[:maxlen] X_train.extend(X_train_dog[:maxlen]) y_train = y_train_cat[:maxlen] y_train.extend(y_train_dog[:maxlen]) print(maxlen, len(X_train)) maxlen = min(len(X_test_cat), len(X_test_dog)) X_test = X_test_cat[:maxlen] X_test.extend(X_test_dog[:maxlen]) y_test = y_test_cat[:maxlen] y_test.extend(y_test_dog[:maxlen]) maxlen
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Extract samples

Wave files are too long and too few. Let's split them into smaller parts. One part is going to be 10,000 samples long.
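The chunking can be previewed on a toy signal before implementing it. Note that `np.array_split(x, n)` returns `n` roughly equal parts, so a part may be slightly longer than `maxlen` and is then truncated to its last `maxlen` samples; the `max(1, ...)` guard below is an added assumption so that signals shorter than `maxlen` still yield one chunk:

```python
import numpy as np

# Toy preview of the chunking (25-sample "signal", maxlen = 10):
# np.array_split gives 25 // 10 = 2 roughly equal parts (13 and 12
# samples), each then truncated to its last maxlen samples.
sig = np.arange(25)
maxlen = 10
parts = np.array_split(sig, max(1, sig.shape[0] // maxlen))  # guard is an assumption
chunks = [p[-maxlen:] for p in parts]
print([len(p) for p in parts], [len(c) for c in chunks])  # [13, 12] [10, 10]
```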
def create_padded_mtx(list_of_lists, labels, maxlen):
    X = []
    y = []
    for i, x in enumerate(list_of_lists):
        # max(1, ...) guards against signals shorter than maxlen
        x_mult = np.array_split(x, max(1, x.shape[0] // maxlen))
        for x in x_mult:
            pad_size = maxlen - x.shape[0]
            if pad_size > 0:
                # left-pad short chunks with zeros
                pad = np.zeros(pad_size)
                x = np.concatenate((pad, x))
            X.append(x[-maxlen:])
        label = labels[i]
        y.extend([label] * len(x_mult))
    return np.array(X), np.array(y)

sample_len = 10000

X_train, y_train = create_padded_mtx(X_train, y_train, sample_len)
X_test, y_test = create_padded_mtx(X_test, y_test, sample_len)
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Scale samples

Wav samples vary over a large range; we prefer values closer to zero. StandardScaler scales all values so that the dataset has a mean of 0 and a standard deviation of 1. Note that we fit StandardScaler on the train data only and use those values to transform both the train and the test matrices.
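What `fit_transform`/`transform` amount to can be sketched in plain NumPy (an illustration of the behaviour, not sklearn's implementation): the mean and standard deviation come from the train data only and are reused for the test data, so the scaled test set is generally not exactly zero-mean:

```python
import numpy as np

# Statistics come from the TRAIN data only and are reused for the test data.
train = np.array([[1.0], [2.0], [3.0]])
test = np.array([[4.0]])
mu, sigma = train.mean(axis=0), train.std(axis=0)
train_scaled = (train - mu) / sigma   # what fit_transform does
test_scaled = (test - mu) / sigma     # what transform does
print(train_scaled.mean())            # 0.0 by construction
print(float(test_scaled[0, 0]))       # > 0: test scaled with train stats
```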
scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) np.mean(X_train), np.std(X_train), np.mean(X_test), np.std(X_test)
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Shuffle data
shuf_idx = np.arange(X_train.shape[0]) np.random.shuffle(shuf_idx) X_train = X_train[shuf_idx] y_train = y_train[shuf_idx] X_train.shape, X_test.shape
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
The number of unique cat and dog samples:
cnt = np.unique(y_train, return_counts=True)[1]
print("Number of woofs: {}\nNumber of meows: {}".format(cnt[0], cnt[1]))
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Fully connected feed forward network Define the model
input_layer = Input(batch_shape=(None, X_train.shape[1]))
layer = Dense(100, activation="sigmoid")(input_layer)
# randomly disable 20% of the neurons, prevents or reduces overfitting
layer = Dropout(.2)(layer)
# chain the second block onto the first (not onto input_layer)
layer = Dense(100, activation="sigmoid")(layer)
layer = Dropout(.2)(layer)
layer = Dense(1, activation="sigmoid")(layer)

model = Model(inputs=input_layer, outputs=layer)
model.compile("rmsprop", loss="binary_crossentropy")
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Train the model

Early stopping stops the training if the validation loss does not decrease anymore.
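The idea behind the patience parameter can be sketched in plain Python (an illustration of the concept, not Keras's exact bookkeeping):

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the epoch index at which training would stop, or None."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:      # improvement: remember it, reset the counter
            best = loss
            wait = 0
        else:                # no improvement: count epochs we have waited
            wait += 1
            if wait > patience:
                return epoch
    return None

# loss improves, then plateaus: with patience=2 we stop at epoch 5
print(early_stop_epoch([1.0, 0.8, 0.7, 0.7, 0.71, 0.72], patience=2))  # 5
```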
%%time
ea = EarlyStopping(patience=2)
model.fit(X_train, y_train, epochs=100, batch_size=32, verbose=1, validation_split=.1, callbacks=[ea])
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Predict labels
pred = model.predict(X_test) labels = np.round(pred)
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Precision, recall and F-score
prec, rec, F, _ = precision_recall_fscore_support(y_test, labels) print("Dog\n===========\nprec:{}\nrec:{}\nF-score:{}".format(prec[0], rec[0], F[0])) print("\nCat\n===========\nprec:{}\nrec:{}\nF-score:{}".format(prec[1], rec[1], F[1]))
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Convolutional neural network CNNs are not only good at image processing but also at handling long temporal data such as audio files. Convert data to 3D tensors CNNs and RNNs require 3D tensors instead of 2D tensors (normal matrices). 3D tensors are usually shaped as batch_size x timestep x feature_number, where batch_size is the number of samples fed to the network at once, timestep is the number of time steps the samples cover and feature_number is the dimension of the feature vectors. Audio files are one dimensional, so feature_number is 1. Reshaping X_train and X_test:
X_train_3d = X_train.reshape(X_train.shape[0], X_train.shape[1], 1) X_test_3d = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Define the model
# list of convolutional layers, try adding more or changing the parameters conv_layers = [ {'filters': 200, 'kernel_size': 40, 'strides': 2, 'padding': "same", 'activation': "relu"}, {'filters': 200, 'kernel_size': 10, 'strides': 10, 'padding': "same", 'activation': "relu"}, {'filters': 50, 'kernel_size': 10, 'strides': 10, 'padding': "same", 'activation': "relu"}, ] input_layer = Input(batch_shape=(None, X_train_3d.shape[1], 1)) layer = Conv1D(**(conv_layers[0]))(input_layer) for cfg in conv_layers[1:]: layer = Conv1D(**cfg)(layer) layer = Dropout(.2)(layer) # reduce the number of parameters layer = MaxPooling1D(2, padding="same")(layer) layer = LSTM(128)(layer) out = Dense(1, activation="sigmoid")(layer) m = Model(inputs=input_layer, outputs=out) m.compile("adam", loss='binary_crossentropy')
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Train the model
%%time ea = EarlyStopping(patience=2) hist = m.fit(X_train_3d, y_train, epochs=1000, batch_size=128, validation_split=.1, callbacks=[ea])
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Predict labels
pred = m.predict(X_test_3d) labels = (pred > 0.5).astype(int)
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Evaluate
prec, rec, F, _ = precision_recall_fscore_support(y_test, labels) print("Dog\n===========\nprec:{}\nrec:{}\nF-score:{}".format(prec[0], rec[0], F[0])) print("\nCat\n===========\nprec:{}\nrec:{}\nF-score:{}".format(prec[1], rec[1], F[1]))
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Now let us write a small class implementing 1D semi-infinite leads. The class below is just a simple collection of the analytic formulae for the spectrum, group velocity and surface Green's function. Most functions have meaningful default parameters (e.g. hopping $\gamma=-1$, on-site potential $\varepsilon_0=0$). The functions are written keeping in mind that for a generic scattering problem the energy is the important variable and not the wavenumber.
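Before wrapping them in a class, the analytic formulae can be sanity-checked numerically (a sketch assuming the default parameters $\gamma=-1$, $\varepsilon_0=0$): the inverse dispersion kE should undo Ek, and the group velocity should match the numerical derivative $dE/dk$:

```python
from numpy import cos, sin, arccos, isclose

gamma, eps0 = -1.0, 0.0
Ek = lambda k: eps0 + 2 * gamma * cos(k)         # spectrum E(k)
kE = lambda E: arccos((E - eps0) / (2 * gamma))  # inverse on one branch
vk = lambda k: -2 * gamma * sin(k)               # group velocity dE/dk

k = 1.2
print(isclose(kE(Ek(k)), k))                     # True: kE inverts Ek
dk = 1e-6
dEdk = (Ek(k + dk) - Ek(k - dk)) / (2 * dk)      # numerical derivative
print(isclose(vk(k), dEdk))                      # True: v = dE/dk
```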
class lead1D:
    'A class for simple 1D leads'

    def __init__(self, eps0=0, gamma=-1, **kwargs):
        'We assume real hopping \gamma and on-site \epsilon_0 parameters!'
        self.eps0 = eps0
        self.gamma = gamma
        return

    def Ek(self, k):
        'Spectrum as a function of k'
        return self.eps0 + 2*self.gamma*cos(k)

    def kE(self, E, **kwargs):
        '''
        Wavenumber as a function of E.
        If the keyword a=True is given, then it gives back two k values,
        one positive and one negative.
        '''
        a = kwargs.get('a', False)
        k = arccos((E-self.eps0)/(2*self.gamma))
        if a:
            return array([-k, k])
        else:
            return k

    def vE(self, E=0, **kwargs):
        '''
        Group velocity as a function of E.
        If the keyword a=True is given, then it gives back two v values,
        one positive and one negative.
        '''
        a = kwargs.get('a', False)
        k = self.kE(E)
        v = -2*self.gamma*sin(k)
        if a:
            return array([-v, v])
        else:
            return v

    def sgf(self, E=0):
        '''
        Surface Green's function of a semi-infinite 1D lead.
        '''
        return exp(1.0j*self.kE(E))/self.gamma

    def sgfk(self, k=pi/2):
        '''
        Surface Green's function of a semi-infinite 1D lead in terms of k.
        '''
        return exp(1.0j*k)/self.gamma

    def vk(self, k=pi/2):
        '''
        Group velocity in terms of k.
        '''
        return -2*self.gamma*sin(k)
1D.ipynb
oroszl/mezo
gpl-3.0
Let us investigate the following simple scattering setups: For each system we shall write a small function that generates the scattering matrix of the problem as a function of the energy $E$ and other relevant parameters. We start with the tunnel junction, where the only parameter is $\alpha$, the hopping matrix element coupling the two leads.
def Smat_tunnel(E,alpha):
    # Definition of the leads
    L1=lead1D()
    L2=lead1D()
    E=E+0.0000001j # In order to make things meaningful
                   # outside of the band we add a tiny
                   # imaginary part to the energy
    # Green's function of the decoupled leads
    g0= array([[L1.sgf(E=E),0          ],
               [0          ,L2.sgf(E=E)]])
    # Potential coupling the leads
    V= array([[0    ,alpha],
              [alpha,0    ]])
    # Dyson's equation
    G=inv(inv(g0)-V)
    # is the channel open?
    # since both sides have the same
    # structure they are open or closed
    # at the same time
    isopen=int(imag(L1.kE(E))<0.001)
    # vector of the sqrt of the velocities
    vs=sqrt(array([[L1.vE(E=E)],[L2.vE(E=E)]]))
    # Scattering matrix from the Fisher-Lee relations
    return matrix(1.0j*G*(vs*vs.T)-eye(2))*isopen
1D.ipynb
oroszl/mezo
gpl-3.0
Now we write a small script to generate a figure interactively depending on $\alpha$, so we can explore the parameter space. We have also included an extra factor on top of the Fisher-Lee relations taking into account whether a channel is open or not.
energy_range=linspace(-4,4,1000) # this will be the plotted energy range
figsize(8,6) # setting the figure size
fts=20 # the default font is a bit too small so we make it larger
# using the interact decorator we can have an interactive plot
@interact(alpha=FloatSlider(min=-2,max=2,step=0.1,value=-1,description=r'$\alpha$'))
def tunnel(alpha=-1):
    '''
    This function draws a picture of the transmission and
    reflection coefficients of a 1D tunnel junction.
    '''
    TR=[]  # we shall collect the values to be plotted in these variables
    REF=[]
    for ene in energy_range:         # energy scan
        SS=Smat_tunnel(ene,alpha)    # obtain S-matrix
        TR.append(abs(SS[0,1])**2)   # extract transmission coeff.
        REF.append(abs(SS[0,0])**2)  # extract reflection coeff.
    TR=array(TR)
    REF=array(REF)
    # make a pretty plot
    plot(energy_range,TR,label='T',linewidth=3)
    plot(energy_range,REF,label='R',linewidth=2)
    plot(energy_range,REF+TR,label='T+R',linewidth=2)
    plot(energy_range,zeros_like(energy_range),'k-')
    ylim(-0.2,1.8);
    xticks(fontsize=fts)
    yticks(fontsize=fts)
    xlabel('Energy',fontsize=fts);
    legend(fontsize=fts);
    grid();
    title('Transmission for a tunnel junction',fontsize=fts)
1D.ipynb
oroszl/mezo
gpl-3.0
Similarly to the tunnel junction, we start with a function that generates the scattering matrix. We have to be careful, since now the Green's function of the decoupled system is a $3\times3$ object.
def Smat_BW(E,t1,t2,eps1):
    # Definition of the leads
    L1=lead1D()
    L2=lead1D()
    E=E+0.000000001j # In order to make things meaningful
                     # outside of the band we add a tiny
                     # imaginary part to the energy
    # Green's function of the decoupled system.
    # Note that the Green's function of a
    # decoupled single site is just the reciprocal
    # of (E-eps1)!!
    g0= array([[L1.sgf(E=E),0          ,0         ],
               [0          ,L2.sgf(E=E),0         ],
               [0          ,0          ,1/(E-eps1)]])
    # Potential coupling the leads
    V= array([[0 ,0 ,t1],
              [0 ,0 ,t2],
              [t1,t2,0 ]])
    # Dyson's equation
    G=inv(inv(g0)-V)
    # is the channel open?
    isopen=int(imag(L1.kE(E))<0.001)
    # vector of the sqrt of the velocities
    vs=sqrt(array([[L1.vE(E=E)],[L2.vE(E=E)]]))
    # Scattering matrix from the Fisher-Lee relations.
    # Note that we only need the matrix elements
    # of the Green's function on the "surface",
    # that is only the upper 2x2 part!
    return matrix(1.0j*G[:2,:2]*(vs*vs.T)-eye(2))*isopen
1D.ipynb
oroszl/mezo
gpl-3.0
Now we can again write a script for a nice interactive plot
energy_range=linspace(-4,4,1000) figsize(8,6) fts=20 @interact(t1=FloatSlider(min=-2,max=2,step=0.1,value=-1,description=r'$\gamma_L$'), t2=FloatSlider(min=-2,max=2,step=0.1,value=-1,description=r'$\gamma_R$'), eps1=FloatSlider(min=-2,max=2,step=0.1,value=0,description=r'$\varepsilon_1$')) def BW(t1=-1,t2=-1,eps1=0): TR=[] REF=[] for ene in energy_range: SS=Smat_BW(ene,t1,t2,eps1) TR.append(abs(SS[0,1])**2) REF.append(abs(SS[0,0])**2) TR=array(TR) REF=array(REF) plot(energy_range,TR,label='T',linewidth=3) plot(energy_range,REF,label='R',linewidth=2) plot(energy_range,REF+TR,label='T+R',linewidth=1) plot(energy_range,zeros_like(energy_range),'k-') ylim(-0.2,1.8); xticks(fontsize=fts) yticks(fontsize=fts) xlabel('Energy',fontsize=fts); legend(fontsize=fts); grid(); title('Transmission for a resonant level',fontsize=fts)
1D.ipynb
oroszl/mezo
gpl-3.0
Exercises

Using the above two examples write code that explores the Fano resonance of a resonant level coupled sideways to a lead!
Using the above examples write code that explores the Aharonov-Bohm effect!

Some more examples

A simple potential step. The on-site potential in the right lead is shifted by $V_0$. Now in general the numbers of open channels in the left and right lead are not the same. If the right channel is closed, then we can only have reflection.
def Smat_step(E,V0):
    # Definition of the leads
    L1=lead1D()
    L2=lead1D(eps0=V0)
    E=E+0.0001j # In order to make things meaningful
                # outside of the band we add a tiny
                # imaginary part to the energy
    # Green's function of the decoupled leads
    g0= array([[L1.sgf(E=E),0          ],
               [0          ,L2.sgf(E=E)]])
    # Potential coupling the leads
    V= array([[0 ,-1],
              [-1,0 ]])
    # Dyson's equation
    G=inv(inv(g0)-V)
    # is the channel open?
    isopen=array([[float(imag(L1.kE(E))<0.001)],[float(imag(L2.kE(E))<0.001)]])
    # vector of the sqrt of the velocities
    vs=sqrt(array([[L1.vE(E=E)],[L2.vE(E=E)]]))
    # Scattering matrix from the Fisher-Lee relations
    return matrix((1.0j*G*(vs*vs.T)-eye(2))*isopen*isopen.T)

energy_range=linspace(-4,4,1000)
figsize(8,6)
fts=20
@interact(V0=FloatSlider(min=-2,max=2,step=0.1,value=0,description=r'$V_0$'))
def step(V0):
    TR=[]
    REF=[]
    for ene in energy_range:
        SS=Smat_step(ene,V0)
        TR.append(abs(SS[0,1])**2)
        REF.append(abs(SS[0,0])**2)
    TR=array(TR)
    REF=array(REF)
    plot(energy_range,TR,label='T',linewidth=3)
    plot(energy_range,REF,label='R',linewidth=2)
    plot(energy_range,REF+TR,label='T+R',linewidth=1)
    plot(energy_range,zeros_like(energy_range),'k-')
    ylim(-0.2,1.8);
    xticks(fontsize=fts)
    yticks(fontsize=fts)
    xlabel('Energy',fontsize=fts);
    legend(fontsize=fts);
    grid();
    title('Transmission for a potential step',fontsize=fts)
1D.ipynb
oroszl/mezo
gpl-3.0
This is a simple model of a Fabry-Perot resonator realised by two tunnel barriers. This example also illustrates how we can have a larger scattering region.
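The block-matrix assembly used for the larger scattering region can be illustrated on its own (made-up diagonal blocks, just to show the vstack/hstack pattern):

```python
import numpy as np

# Stacking a 2x2 lead block g0L and an N x N scatterer block g0S into
# one (2+N) x (2+N) block-diagonal matrix with vstack/hstack.
g0L = np.eye(2)
g0S = 2 * np.eye(3)
Z = np.zeros((2, 3))
g0 = np.vstack((np.hstack((g0L, Z)),
                np.hstack((Z.T, g0S))))
print(g0.shape)  # (5, 5)
```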
def Smat_FP(E,t1,t2,N):
    # Definition of the leads
    L1=lead1D()
    L2=lead1D()
    E=E+0.000000001j # In order to make things meaningful
                     # outside of the band we add a tiny
                     # imaginary part to the energy
    # Green's function of the decoupled system:
    # the leads
    g0L= array([[L1.sgf(E=E),0          ],
                [0          ,L2.sgf(E=E)]])
    # the quantum dot
    g0S= inv(E*eye(N)-(-eye(N,N,-1)-eye(N,N,1)))
    Z=zeros((len(g0L[0,:]),len(g0S[:,0])))
    # the decoupled full Green's function is built up by stacking the blocks
    g0=vstack((hstack((g0L,Z )),
               hstack((Z.T,g0S))))
    v=zeros_like(Z)
    v[0,0]=t1
    v[-1,-1]=t2
    # Potential coupling the leads
    V=vstack((hstack((zeros_like(g0L),v              )),
              hstack((v.T            ,zeros_like(g0S)))))
    # Dyson's equation
    G=inv(inv(g0)-V)
    # is the channel open?
    isopen=array([[float(imag(L1.kE(E))<0.001)],[float(imag(L2.kE(E))<0.001)]])
    # vector of the sqrt of the velocities
    vs=sqrt(array([[L1.vE(E=E)],[L2.vE(E=E)]]))
    # Scattering matrix from the Fisher-Lee relations
    return matrix((1.0j*G[0:2,0:2]*(vs*vs.T)-eye(2))*isopen*isopen.T)

energy_range=linspace(-4,4,1000)
figsize(8,6)
fts=20
@interact(t1=FloatSlider(min=-2,max=2,step=0.1,value=-1,description=r'$\gamma_L$'),
          t2=FloatSlider(min=-2,max=2,step=0.1,value=-1,description=r'$\gamma_R$'),
          N=IntSlider(min=1,max=10,value=1,description=r'$N$'))
def FP(t1=-1,t2=-1,N=1):
    TR=[]
    REF=[]
    for ene in energy_range:
        SS=Smat_FP(ene,t1,t2,N)
        TR.append(abs(SS[0,1])**2)
        REF.append(abs(SS[0,0])**2)
    TR=array(TR)
    REF=array(REF)
    plot(energy_range,TR,label='T',linewidth=3)
    plot(energy_range,REF,label='R',linewidth=2)
    plot(energy_range,REF+TR,label='T+R',linewidth=1)
    plot(energy_range,zeros_like(energy_range),'k-')
    ylim(-0.2,1.8);
    xticks(fontsize=fts)
    yticks(fontsize=fts)
    xlabel('Energy',fontsize=fts);
    legend(fontsize=fts);
    grid();
    title('Transmission for a Fabry-Perot resonator',fontsize=fts)
1D.ipynb
oroszl/mezo
gpl-3.0
At this point my kernel crashed and I had to reload the data from CSV. This served as a natural break point, because the API heavy-lifting just finished.
#http://stackoverflow.com/questions/21269399/datetime-dtypes-in-pandas-read-csv
col_dtypes = {"":int,"community_area":float,"completion_date":object,
              "creation_date":object,"latitude":float,
              "longitude":float,"service_request_number":object,
              "type_of_service_request":str,"URLS":str,"FIPS":object}
parse_dates = ["completion_date","creation_date"]

city_data = pd.read_csv("Safety_csv_1.csv",dtype=col_dtypes, parse_dates=parse_dates)
city_data.drop('Unnamed: 0',axis=1,inplace=True)
city_data.drop('URLS',axis=1,inplace=True)

# .loc replaces the deprecated .ix indexer
city_data.loc[:,"time_to_completion"] = city_data.loc[:,"completion_date"] - city_data.loc[:,"creation_date"]
city_data.loc[:,"time_to_completion"] = city_data.loc[:,"time_to_completion"] / np.timedelta64(1, 'D')
city_data = get_com_areas(city_data)
Diagnostic/.ipynb_checkpoints/ML HW0-1-checkpoint.ipynb
anisfeld/MachineLearning
mit