We compute the backward error, which is small.
np.linalg.norm(b-np.dot(A,x))
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
We now compute the forward error. This is only possible because we know the exact solution by construction.
np.linalg.norm(x-x_exact)

comparing_solutions = np.zeros((n,2))
comparing_solutions[:,0] = x_exact
comparing_solutions[:,1] = x

df = pd.DataFrame(comparing_solutions, columns=["Exact solution", "Numerical solution"])
df.style.set_caption("Comparison between the exact solution and the NumPy-solve's solution")

plt.figure(figsi...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
<div id='classnotesPaluExample' /> Motivation for PALU: classnotes example
A = np.array([[1e-20, 1],[1, 2]])
l, u = lu_decomp(A, show=True)
np.dot(l, u)
p, l, u = palu_decomp(A, show=True)
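Here `lu_decomp` and `palu_decomp` are helpers defined earlier in the notebook. A minimal standalone sketch (my own code, not the notebook's) of why the tiny pivot `1e-20` demands row exchanges: eliminating with it produces a multiplier of 1e20, and the entry 2 is swallowed by rounding.

```python
import numpy as np

# Same matrix as the cell above, with a right-hand side whose exact solution is ~[1, 1].
A = np.array([[1e-20, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 3.0])

# LU *without* pivoting: multiplier m = a21/a11 = 1e20.
m = A[1, 0] / A[0, 0]
U = A.copy()
U[1] -= m * U[0]     # U[1,1] = 2 - 1e20 rounds to -1e20: the "2" is lost.
c = b.copy()
c[1] -= m * c[0]

# Back substitution on the corrupted triangular system.
x2 = c[1] / U[1, 1]
x1 = (c[0] - U[0, 1] * x2) / U[0, 0]
x_nopivot = np.array([x1, x2])

# Reference solution: NumPy applies partial pivoting internally.
x_ref = np.linalg.solve(A, b)

print(x_nopivot)  # first component collapses to 0.0 instead of ~1.0
print(x_ref)      # close to [1, 1]
```

Swapping the rows first (as PALU does) keeps the multiplier small and both components accurate.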
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
<div id='iterativeMethodExample' /> Iterative Methods: classnotes example
A = np.array([[5.,4.],[1.,3.]])
b = np.array([6.,-1.])

np_sol = np.linalg.solve(A, b)

jac_sol = jacobi(A, b, n_iter=50)
jac_err = error(jac_sol, np_sol)

gauss_seidel_sol = gauss_seidel(A, b, n_iter=50)
gauss_seidel_err = error(gauss_seidel_sol, np_sol)

sor_sol = sor(A, b, w=1.09310345, n_iter=50)
sor_err = error(sor_sol,...
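`jacobi`, `gauss_seidel`, `sor`, and `error` are functions defined earlier in the notebook. As a rough standalone sketch (names and structure are mine, under the assumption that the notebook's `jacobi` does the standard component-wise update), Jacobi iteration for this system looks like:

```python
import numpy as np

def jacobi_sketch(A, b, n_iter=50, x0=None):
    """Plain Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k),
    where D is the diagonal of A and R = A - D."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b) if x0 is None else x0
    for _ in range(n_iter):
        x = (b - R @ x) / D
    return x

A = np.array([[5., 4.], [1., 3.]])
b = np.array([6., -1.])
x = jacobi_sketch(A, b, n_iter=50)
print(x)  # converges to the exact solution [2, -1]
```

Convergence here is guaranteed because A is strictly diagonally dominant, so the iteration matrix D⁻¹R has spectral radius below 1.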
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
Step 2: Model the data

The data consists of the description of the products (the demand, the inside and outside costs, and the resource consumption) and the capacity of the various resources.
products = [("kluski", 100, 0.6, 0.8),
            ("capellini", 200, 0.8, 0.9),
            ("fettucine", 300, 0.3, 0.4)]

# resources are a list of simple tuples (name, capacity)
resources = [("flour", 20),
             ("eggs", 40)]

consumptions = {("kluski", "flour"): 0.5,
                ("kluski", "eggs"): 0.2, ...
examples/mp/jupyter/pasta_production.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 3: Prepare the data

The data is very simple and is ready to use without any cleansing, massaging, or refactoring.

Step 4: Set up the prescriptive model

Create the DOcplex model

The model contains all the business constraints and defines the objective. We now use CPLEX Modeling for Python to build a Mixed Integer Programmin...
from docplex.mp.model import Model

mdl = Model(name="pasta")
examples/mp/jupyter/pasta_production.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Define the decision variables
inside_vars = mdl.continuous_var_dict(products, name='inside')
outside_vars = mdl.continuous_var_dict(products, name='outside')
examples/mp/jupyter/pasta_production.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Express the business constraints Each product can be produced either inside the company or outside, at a higher cost. The inside production is constrained by the company's resources, while outside production is considered unlimited.
# --- constraints ---

# demand satisfaction
mdl.add_constraints((inside_vars[prod] + outside_vars[prod] >= prod[1],
                     'ct_demand_%s' % prod[0]) for prod in products)

# --- resource capacity ---
mdl.add_constraints((mdl.sum(inside_vars[p] * consumptions[p[0], res[0]] for p in products) <= res[1],
                     'ct_res_%s' % res[0]) fo...
examples/mp/jupyter/pasta_production.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Express the objective

Minimize the production cost for a number of products while satisfying customer demand.
total_inside_cost = mdl.sum(inside_vars[p] * p[2] for p in products)
total_outside_cost = mdl.sum(outside_vars[p] * p[3] for p in products)

mdl.minimize(total_inside_cost + total_outside_cost)
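Putting the constraints and objective together, the linear program being built can be summarized as follows (the symbols are my shorthand, not from the notebook: $d_p$ demand, $c^{\text{in}}_p$ and $c^{\text{out}}_p$ the inside/outside unit costs, $a_{p,r}$ the consumption of resource $r$ by product $p$, and $C_r$ the resource capacity):

```latex
\begin{aligned}
\min \quad & \sum_{p} \left( c^{\text{in}}_p \,\mathrm{inside}_p + c^{\text{out}}_p \,\mathrm{outside}_p \right) \\
\text{s.t.} \quad & \mathrm{inside}_p + \mathrm{outside}_p \ge d_p && \forall p \quad \text{(demand satisfaction)} \\
& \sum_{p} a_{p,r} \,\mathrm{inside}_p \le C_r && \forall r \quad \text{(resource capacity)} \\
& \mathrm{inside}_p,\ \mathrm{outside}_p \ge 0
\end{aligned}
```

Outside production appears only in the objective and the demand constraint, which is what makes it "unlimited" in this model.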
examples/mp/jupyter/pasta_production.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Solve with Decision Optimization Now we have everything we need to solve the model, using Model.solve(). The following cell solves using your local CPLEX (if any, and provided you have added it to your PYTHONPATH variable).
mdl.solve()
examples/mp/jupyter/pasta_production.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 5: Investigate the solution and then run an example analysis
obj = mdl.objective_value
print("* Production model solved with objective: {:g}".format(obj))

print("* Total inside cost=%g" % total_inside_cost.solution_value)
for p in products:
    print("Inside production of {product}: {ins_var}".format(product=p[0], ins_var=inside_vars[p].solution_value))

print("* Total outside c...
examples/mp/jupyter/pasta_production.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Configure the dataset for performance These are two important methods you should use when loading data to make sure that I/O does not become blocking. .cache() keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too lar...
AUTOTUNE = tf.data.experimental.AUTOTUNE

train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Using the Embedding layer Keras makes it easy to use word embeddings. Take a look at the Embedding layer. The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a param...
# Embed a 1,000 word vocabulary into 5 dimensions.
# TODO
embedding_layer = tf.keras.layers.Embedding(1000, 5)
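To make the "lookup table" picture concrete without TensorFlow, an Embedding layer's weights are just a `(vocab_size, embedding_dim)` matrix, and looking up token indices is plain row indexing (a NumPy sketch, not the Keras implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, embedding_dim = 1000, 5
# The Embedding layer's weight matrix: one row per vocabulary index,
# initialized randomly and learned during training.
embedding_matrix = rng.normal(size=(vocab_size, embedding_dim))

# "Looking up" a batch of token indices is just row indexing.
token_ids = np.array([3, 17, 3])
vectors = embedding_matrix[token_ids]

print(vectors.shape)  # (3, 5): one 5-dimensional vector per token
# The same index always maps to the same vector.
print(np.array_equal(vectors[0], vectors[2]))  # True
```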
courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Create a classification model Use the Keras Sequential API to define the sentiment classification model. In this case it is a "Continuous bag of words" style model. * The TextVectorization layer transforms strings into vocabulary indices. You have already initialized vectorize_layer as a TextVectorization layer and bui...
embedding_dim = 16

# TODO
model = Sequential([
    vectorize_layer,
    Embedding(vocab_size, embedding_dim, name="embedding"),
    GlobalAveragePooling1D(),
    Dense(16, activation='relu'),
    Dense(1)
])
courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Compile and train the model Create a tf.keras.callbacks.TensorBoard.
# TODO
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Compile and train the model using the Adam optimizer and BinaryCrossentropy loss.
# TODO
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=10,
    callbacks=[tensorboard_callback])
courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
With this approach the model reaches a validation accuracy of around 84% (note that the model is overfitting since training accuracy is higher). Note: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer. You can look into the model summary to lear...
model.summary()
courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Visualize the model metrics in TensorBoard.
!tensorboard --bind_all --port=8081 --logdir logs
courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Run the following command in Cloud Shell: <code>gcloud beta compute ssh --zone &lt;instance-zone&gt; &lt;notebook-instance-name&gt; --project &lt;project-id&gt; -- -L 8081:localhost:8081</code> Make sure to replace &lt;instance-zone&gt;, &lt;notebook-instance-name&gt; and &lt;project-id&gt;. In Cloud Shell, click Web ...
# TODO
weights = model.get_layer('embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Two files will be created, vectors.tsv and metadata.tsv. Download both files.
try:
    from google.colab import files
    files.download('vectors.tsv')
    files.download('metadata.tsv')
except Exception:
    pass
courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The append() method takes whatever argument I supply to the function, and inserts it into the next available position in the list. Which, in this case, is the very first position (since the list was previously empty, and lists are ordered). What does our list look like now?
print(x)
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
In this code example, I've used the number 1 as an index to y. In doing so, I took out the value at index 1 and put it into a variable named first_element. I then printed it, as well as the list y, and voi-- --wait, "2" is the second element. o_O Python and its spiritual progenitors C and C++ are known as zero-indexed ...
print(y[0])
print(y)
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
Notice the first thing printed is "1". The next line is the entire list (the output of the second print statement). This little caveat is usually the main culprit of errors for new programmers. Give yourself some time to get used to Python's 0-indexed lists. You'll see what I mean when we get to loops. In addition to e...
print(y[-1])
print(y)
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
So, if we're slicing "from 0 to 3", why is this including index 0 but excluding index 3? When you slice an array, the first (starting) index is inclusive; the second (ending) index, however, is exclusive. In mathematical notation, it would look something like this: $[ starting : ending )$ Therefore, the end index is on...
z = [42, 502.4, "some string", 0]
print(z)
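To see the inclusive-start / exclusive-end rule in action (`letters` is a throwaway example list, not from the lecture):

```python
# The start index of a slice is inclusive; the end index is exclusive.
letters = ["a", "b", "c", "d", "e"]

print(letters[0:3])  # ['a', 'b', 'c'] -- index 3 ('d') is excluded
print(letters[1:])   # ['b', 'c', 'd', 'e'] -- from index 1 to the end
print(letters[:2])   # ['a', 'b'] -- from the start, up to but not including index 2
```

A handy consequence: `letters[0:3]` contains exactly 3 - 0 = 3 elements, so the length of a slice is always `end - start`.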
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
Can't do that with tuples, sorry. Tuples are immutable, meaning you can't change them once you've built them.
y[2] = "does this work?"
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
There are certain situations where this can be very useful. It should be noted that sets can actually be built from lists, so you can build a list and then turn it into a set:
x = [1, 2, 3, 3]
s = set(x)  # Take the list x and "cast" it as a set.
print(s)
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
If you want to add elements to a set, you can use the add method. If you want to remove elements from a set, you can use the discard or remove methods. But you can't index or slice a set. This is because sets do not preserve any notion of ordering. Think of them like a "bag" of elements: you can add or remove things fr...
s = set([1, 3, 6, 2, 5, 8, 8, 3, 2, 3, 10])
print(10 in s)  # Basically asking: is 10 in our set?
print(11 in s)
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
Aside: the in keyword is wonderful. It's a great way of testing if a variable is in your collection (list, set, or tuple) without having to loop over the entire collection looking for it yourself (which we'll see how to do in a bit). Part 3: Dictionaries Dictionaries deserve a section all to themselves. Are you familia...
d = dict()
# Or...
d = {}
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
Yes, you use strings as keys! In this way, you can treat dictionaries as "look up" tables--maybe you're storing information on people in a beta testing program. You can store their information by name:
d["shannon_quinn"] = ["some", "personal", "information"]
print(d)
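A small self-contained example of the "look up table" idea (the second entry is toy data I made up for illustration):

```python
# Build a fresh lookup table keyed by name.
d = {}
d["shannon_quinn"] = ["some", "personal", "information"]
d["jane_doe"] = ["other", "info"]

# Retrieval uses the same square-bracket syntax as lists, but with the key.
print(d["shannon_quinn"])  # ['some', 'personal', 'information']

# .keys(), .values(), and .items() expose the contents for looping.
print(list(d.keys()))      # ['shannon_quinn', 'jane_doe']
print(len(d))              # 2 entries
```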
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
(it's basically the entire dictionary, but this method is useful for looping) Speaking of looping... Part 4: Loops Looping is an absolutely essential component in general programming and data science. Whether you're designing a web app or a machine learning model, most of the time you have no idea how much data you're ...
ages = [21, 22, 19, 19, 22, 21, 22, 31]
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
But that's the problem mentioned earlier: how do you even know how many elements are in your list when your program runs on a web server? If it only had 3 elements the above code would work fine, but the moment your list has 2 items or 4 items, your code would need to be rewritten. for loops One type of loop, the "brea...
the_list = [2, 5, 7, 9]
for N in the_list:       # Header
    print(N, end = " ")  # Body
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
There are two main parts to the loop: the header and the body. The header contains 1) the collection we're iterating over (e.g., the list), and 2) the "placeholder" we're using to hold the current value (e.g., N). The body is the chunk of code under the header (indented!) that executes on each element of the collec...
age_sum = 0  # Setting up a variable to store the sum of all the elements in our list.
ages = [21, 22, 19, 19, 22, 21, 22, 31]  # Here's our list.

# Summing up everything in a list is actually pretty easy, code-wise:
for age in ages:  # For each individual element (stored as "age") in the list "ages"...
    age_sum +=...
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
If what's happening isn't clear, go back over it. If it still isn't clear, go over it again. If it still isn't clear, please ask! You can loop through sets and tuples the same way.
s = set([1, 1, 2, 3, 5])
for item in s:
    print(item, end = " ")

t = tuple([1, 1, 2, 3, 5])
for item in t:
    print(item, end = " ")
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
Important: INDENTATION MATTERS. You'll notice in these loops that the loop body is distinctly indented relative to the loop header. This is intentional and is indeed how it works! If you fail to indent the body of the loop, Python will complain:
some_list = [3.14159, "random stuff", 4200]
for item in some_list:
print(item)
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
With loops, whitespace in Python really starts to matter. If you want many things to happen inside of a loop (and we'll start writing longer loops!), you'll need to indent every line! while loops "While" loops go back yet again to the concept of boolean logic we introduced in an earlier lecture: loop until some conditi...
x = 10
while x < 15:
    print(x, end = " ")
    x += 1  # TAKE NOTE OF THIS LINE.
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
x < 15 is a boolean statement: it is either True or False, depending on the value of x. Initially, this number is 10, which is certainly < 15, so the loop executes. 10 is printed, x is incremented, and the condition is checked again. A potential downside of while loops: forgetting to update the condition inside t...
for i in range(10, 15):
    print(i, end = " ")  # No update needed!
lectures/L4.ipynb
eds-uga/csci1360e-su17
mit
Another way of looking at random walks is through the Rayleigh distribution, but for this one must first become familiar with the normal distribution. This is a very famous distribution, credited to Karl Gauss, which appears in every textbook because it is a very important distribution for the proce...
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

def rayleigh(x0, n, dt, delta):
    x0 = np.asfarray(x0)
    shp = (n+1,) + x0.shape
    # Use the normal distribution as a "template" to generate the Rayleigh one
    r = np.random.normal(size=shp, sca...
proyecto-final.ipynb
developEdwin/Proyecto-Browniano
gpl-3.0
Mathematical representation of Brownian motion

Many mathematicians tried to describe and solve Brownian motion and the behavior of gases inside containers: why do they move like that? Why do they show such similar patterns? It was not until Einstein tried to solve this problem, and ...
%matplotlib inline
from scipy.stats import norm
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

def brownian(x0, n, dt, delta, out=None):
    x0 = np.asarray(x0)
    r = norm.rvs(size=x0.shape + (n,), scale=delta*np.sqrt(dt))
    if out is None:
        out = np.empty(r.shape...
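The core of the truncated `brownian` function above is that a Brownian path is a cumulative sum of independent normal increments with scale `delta * sqrt(dt)`. A minimal one-dimensional sketch of that idea (my own simplified version, not the notebook's function):

```python
import numpy as np

rng = np.random.default_rng(42)

def brownian_sketch(x0, n, dt, delta):
    """1-D Brownian path: x0 plus a cumulative sum of
    independent N(0, delta^2 * dt) increments."""
    increments = rng.normal(scale=delta * np.sqrt(dt), size=n)
    return x0 + np.cumsum(increments)

path = brownian_sketch(x0=0.0, n=1000, dt=0.1, delta=2.0)
print(path.shape)  # (1000,): one position per time step
```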
proyecto-final.ipynb
developEdwin/Proyecto-Browniano
gpl-3.0
Convolutions in neural networks (1D) <img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-xs.png"/> <img style="margin: auto" width="80%" src="http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-F.png"/> <img style="margin: auto" width="80%" src="h...
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.embeddings import Embedding
from keras.layers.convolutional import Convolution1D, MaxPooling1D

model = Sequential()
model.add(Embedding(5000, 100, input_length=100))
model.add(Dropout(0.25))
model.a...
Wyklady/12/Deep Learning.ipynb
emjotde/UMZ
cc0-1.0
Convolutions in image processing <img style="margin: auto" width="60%" src="http://devblogs.nvidia.com/parallelforall/wp-content/uploads/sites/3/2015/11/convolution.png"/> <img style="margin: auto" width="60%" src="http://devblogs.nvidia.com/parallelforall/wp-content/uploads/sites/3/2015/11/Convolution_schematic.gif"...
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D

model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(1, 28, 28)))
model.add(Activation('relu'))
model.add(MaxPooli...
Wyklady/12/Deep Learning.ipynb
emjotde/UMZ
cc0-1.0
MNIST - Convolutional networks in Keras
from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils...
Wyklady/12/Deep Learning.ipynb
emjotde/UMZ
cc0-1.0
RNNs and LSTMs - Sequence processing <img style="margin: auto" width="20%" src="http://colah.github.io/posts/2015-08-Understanding-LSTMs/img/RNN-rolled.png"/> <img style="margin: auto" width="80%" src="http://colah.github.io/posts/2015-08-Understanding-LSTMs/img/RNN-unrolled.png"/> Long-distance dependencies (Long-dist...
from IPython.display import YouTubeVideo
YouTubeVideo("8BFzu9m52sc")
Wyklady/12/Deep Learning.ipynb
emjotde/UMZ
cc0-1.0
Import Libraries
# Importing the necessary TensorFlow library and printing the TF version.
import tensorflow as tf
print("TensorFlow version: ", tf.version.VERSION)

import os

# Here we'll import the Pandas and NumPy data processing libraries
import pandas as pd
import numpy as np
from datetime import datetime

# Use matplotlib for visualizi...
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Load the Dataset The dataset is based on California's Vehicle Fuel Type Count by Zip Code report. The dataset has been modified to make the data "untidy" and is thus a synthetic representation that can be used for learning purposes.
# Creating a directory to store the dataset
if not os.path.isdir("../data/transport"):
    os.makedirs("../data/transport")

# Download the raw .csv data by copying the data from a cloud storage bucket.
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/untidy_vehicle_data_toy.csv ../data/transport

# ls shows the w...
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Read Dataset into a Pandas DataFrame

Next, let's read in the dataset just copied from the cloud storage bucket and create a Pandas DataFrame. We also call the Pandas .head() function to show the top 5 rows of data in the DataFrame. .head() and .tail() are "best-practice" functions used to investigate datasets.
# Reading the "untidy_vehicle_data_toy.csv" file using the read_csv() function included in the pandas library.
df_transport = pd.read_csv('../data/transport/untidy_vehicle_data_toy.csv')

# Output the first five rows.
df_transport.head()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
DataFrame Column Data Types

DataFrames may have heterogeneous or "mixed" data types; that is, some columns are numbers, some are strings, and some are dates, etc. Because CSV files do not contain information on what data types are contained in each column, Pandas infers the data types when loading the data, e.g. if a col...
# The .info() function displays a concise summary of a dataframe.
df_transport.info()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
From what the .info() function shows us, we have six string objects and one float object. We can definitely see more of the "string" object values now!
# Let's print out the first and last five rows of each column.
print(df_transport)
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Summary Statistics

At this point, we have only one column which contains a numerical value (e.g. Vehicles). For features which contain numerical values, we are often interested in various statistical measures relating to those values. Note that because we only have one numeric feature, we see only one summary statistic...
# We can use .describe() to see some summary statistics for the numeric fields in our dataframe.
df_transport.describe()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's investigate a bit more of our data by using the .groupby() function.
# The .groupby() function is used for splitting the data into groups based on some criteria.
grouped_data = df_transport.groupby(['Zip Code','Model Year','Fuel','Make','Light_Duty','Vehicles'])

# Get the first entry for each fuel type.
df_transport.groupby('Fuel').first()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
What can we deduce about the data at this point? Let's summarize our data by row, column, features, unique, and missing values.
# In pandas, .shape gives the dimensions of a dataframe.
# The number of rows is given by .shape[0]; the number of columns is given by .shape[1].
print("Rows    : ", df_transport.shape[0])
print("Columns : ", df_transport...
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's see the data again -- this time the last five rows in the dataset.
# Output the last five rows in the dataset.
df_transport.tail()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
What Are Our Data Quality Issues? Data Quality Issue #1: Missing Values: Each feature column has multiple missing values. In fact, we have a total of 18 missing values. Data Quality Issue #2: Date DataType: Date is shown as an "object" datatype and should be a datetime. In addition, Date is in one column. Our...
# The isnull() method is used to check and manage NULL values in a data frame.
# TODO 1a
df_transport.isnull().sum()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Run the cell to apply the lambda function.
# Here we are using the apply function with lambda.
# We can use the apply() function to apply the lambda function to both rows and columns of a dataframe.
# TODO 1b
df_transport = df_transport.apply(lambda x: x.fillna(x.value_counts().index[0]))
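What this one-liner does: for each column, `x.value_counts().index[0]` is the column's most frequent value (its mode), and `fillna` substitutes it for every missing entry. A toy dataframe (invented data, not the vehicle dataset) makes the behavior easy to see:

```python
import pandas as pd

# Toy dataframe with one missing value per column.
toy = pd.DataFrame({"fuel": ["Gasoline", None, "Gasoline", "Diesel"],
                    "make": ["Ford", "Ford", None, "Ford"]})

# Fill each column's missing entries with that column's most frequent value.
filled = toy.apply(lambda x: x.fillna(x.value_counts().index[0]))

print(filled.isnull().sum().sum())  # 0 -- no missing values remain
print(filled.loc[1, "fuel"])        # 'Gasoline', the mode of the 'fuel' column
```

Note that mode imputation is a quick heuristic; it can bias a column toward its majority value, which is acceptable here because the notebook's goal is simply to demonstrate handling of missing data.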
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's check again for missing values.
# The isnull() method is used to check and manage NULL values in a data frame.
# TODO 1c
df_transport.isnull().sum()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Data Quality Issue #2: Convert the Date Feature Column to a Datetime Format
# The date column is indeed shown as a string object. We can convert it to the
# datetime datatype with the to_datetime() function in Pandas.
# TODO 2a
df_transport['Date'] = pd.to_datetime(df_transport['Date'], format='%m/%d/%Y')

# Date is now converted and will display the concise summar...
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's confirm the Date parsing. This will also give us another visualization of the data.
# Here, we are creating a new dataframe called "grouped_data" and grouping by the column "Make"
grouped_data = df_transport.groupby(['Make'])

# Get the first entry for each month.
df_transport.groupby('month').first()
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Now that we have the dates as integers, let's do some additional plotting.
# Here we will visualize our data using the figure() function in matplotlib's pyplot module, which creates a new figure.
plt.figure(figsize=(10,6))

# Seaborn's .jointplot() displays a relationship between 2 variables (bivariate) as well as 1D profiles (univariate) in the margins. This plot i...
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Data Quality Issue #3: Rename a Feature Column and Remove a Value.

Our feature columns have different capitalizations in their names, e.g. both upper and lower case. In addition, there are spaces in some of the column names. Finally, we are only interested in years greater than 2006, not "<2006". We can a...
# Let's remove all the spaces from the feature columns by renaming them.
# TODO 3a
df_transport.rename(columns = {'Date': 'date', 'Zip Code': 'zipcode', 'Model Year': 'modelyear',
                              'Fuel': 'fuel', 'Make': 'make', 'Light_Duty': 'lightduty',
                              'Vehicles': 'vehicles'}, inplace = True)

# Output the first two rows.
df_transport.h...
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Note: Next we create a copy of the dataframe to avoid the "SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame" warning. Run the cell to remove the value '<2006' from the modelyear feature column.
# Here, we create a copy of the dataframe to avoid copy warning issues.
# TODO 3b
df = df_transport.loc[df_transport.modelyear != '<2006'].copy()

# Here we will confirm that the modelyear value '<2006' has been removed by doing a value count.
df['modelyear'].value_counts(0)
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Data Quality Issue #4: Handling Categorical Columns The feature column "lightduty" is categorical and has a "Yes/No" choice. We cannot feed values like this into a machine learning model. We need to convert the binary answers from strings of yes/no to integers of 1/0. There are various methods to achieve this. We w...
# Let's count the number of "Yes" and "No" values in the 'lightduty' feature column.
df['lightduty'].value_counts(0)

# Let's convert the Yes to 1 and No to 0.
# The .apply takes a function and applies it to all values of a Pandas series (e.g. lightduty).
df.loc[:,'lightduty'] = df['lightduty'].apply(lambda x: 0 if x=='No' e...
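On a toy Series standing in for the 'lightduty' column, the same `.apply` mapping looks like this (invented values, for illustration only):

```python
import pandas as pd

# Toy stand-in for the 'lightduty' column.
lightduty = pd.Series(["Yes", "No", "Yes", "Yes"])

# Map the strings to integers with .apply, as in the cell above.
encoded = lightduty.apply(lambda x: 0 if x == "No" else 1)

print(encoded.tolist())  # [1, 0, 1, 1]
```

Because the column is strictly binary, this is equivalent to `lightduty.map({"Yes": 1, "No": 0})`; `.apply` is used here simply because the notebook has just introduced it.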
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
One-Hot Encoding Categorical Feature Columns Machine learning algorithms expect input vectors and not categorical features. Specifically, they cannot handle text or string values. Thus, it is often useful to transform categorical features into vectors. One transformation method is to create dummy variables for our cat...
# Making dummy variables for categorical data with more inputs.
data_dummy = pd.get_dummies(df[['zipcode','modelyear', 'fuel', 'make']], drop_first=True)

# Output the first five rows.
data_dummy.head()

# Merging (concatenating) the original data frame with the 'dummy' dataframe.
# TODO 4a
df = pd.concat([df, data_dummy], axis...
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Data Quality Issue #5: Temporal Feature Columns Our dataset now contains year, month, and day feature columns. Let's convert the month and day feature columns to meaningful representations as a way to get us thinking about changing temporal features -- as they are sometimes overlooked. Note that the Feature Engineer...
# Let's print the unique values for "month", "day" and "year" in our dataset.
print('Unique values of month:', df.month.unique())
print('Unique values of day:', df.day.unique())
print('Unique values of year:', df.year.unique())
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Don't worry, this is the last time we will use this code, as you can develop an input pipeline to address these temporal feature columns in TensorFlow and Keras - and it is much easier! But, sometimes you need to appreciate what you're not going to encounter as you move through the course! Run the cell to view the out...
# Here we map each temporal variable onto a circle such that the lowest value for that variable
# appears right next to the largest value. We compute the x- and y- components of that point
# using the sin and cos trigonometric functions.
df['day_sin'] = np.sin(df.day*(2.*np.pi/31))
df['day_cos'] = np.cos(df.day*(2.*np.pi/31...
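A self-contained check of why this circular encoding helps: day 1 and day 31 are far apart as integers but adjacent on the calendar, and the sin/cos pair captures that (standalone NumPy, same formula as the cell above):

```python
import numpy as np

# Day 1 and day 31 are numerically distant but adjacent on the calendar.
days = np.array([1, 15, 31])
day_sin = np.sin(days * (2. * np.pi / 31))
day_cos = np.cos(days * (2. * np.pi / 31))

# Day 31 maps to angle 2*pi, i.e. the point (sin, cos) = (0, 1) --
# right next to day 1's point, as a calendar-aware feature should be.
print(day_sin)
print(day_cos)
```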
courses/machine_learning/deepdive2/launching_into_ml/solutions/improve_data_quality.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Wavelength Response Calculate the wavelength response functions with SunPy. EUV Channels: 94, 131, 171, 193, 211, 335, 304 $\mathrm{\mathring{A}}$
response = sunpy.instr.aia.Response(ssw_path='/Users/willbarnes/Documents/Rice/Research/ssw/')
response.calculate_wavelength_response()
notebooks/sunpy_aia_response_tutorial.ipynb
wtbarnes/aia_response
mit
Peek at the wavelength response functions.
response.peek_wavelength_response()
notebooks/sunpy_aia_response_tutorial.ipynb
wtbarnes/aia_response
mit
Far UV and Visible Channels: 1600, 1700, and 4500 $\mathrm{\mathring{A}}$ We can also very easily calculate the wavelength response functions for the far UV and visible channels centered on 1600, 1700, and 4500 $\mathrm{\mathring{A}}$.
response_uv = sunpy.instr.aia.Response(ssw_path='/Users/willbarnes/Documents/Rice/Research/ssw/',
                                       channel_list=[1600,1700,4500])
response_uv.calculate_wavelength_response()
response_uv.peek_wavelength_response()
notebooks/sunpy_aia_response_tutorial.ipynb
wtbarnes/aia_response
mit
Detailed Instrument Information If you'd like very detailed information about each channel on the instrument, you can also generate an astropy Table that lists all of the different instrument properties that go into the wavelength response calculation. <div class="alert alert-danger" role="alert"> <h1>Warning!</h1> ...
table = sunpy.instr.aia.aia_instr_properties_to_table(
    [94, 131, 171, 193, 211, 335, 304, 1600, 1700, 4500],
    ['/Users/willbarnes/Documents/Rice/Research/ssw/sdo/aia/response/aia_V6_all_fullinst.genx',
     '/Users/willbarnes/Documents/Rice/Research/ssw/sdo/aia/response/aia_V6_fuv...
notebooks/sunpy_aia_response_tutorial.ipynb
wtbarnes/aia_response
mit
Exercises Exercise: In the book, I present a different way to parameterize the quadratic model: $ \Delta p = r p (1 - p / K) $ where $r=\alpha$ and $K=-\alpha/\beta$. Write a version of update_func that implements this version of the model. Test it by computing the values of r and K that correspond to alpha=0.025, be...
# Solution
system = System(t_0=t_0, t_end=t_end, p_0=p_0, alpha=0.025, beta=-0.0018)
system.r = system.alpha
system.K = -system.alpha/system.beta
system.r, system.K

# Solution
def update_func_quad2(pop, t, system):
    """Compute the population next ...
soln/chap07soln.ipynb
AllenDowney/ModSimPy
mit
In this section, we consider the famous Gauss-Markov problem which will give us an opportunity to use all the material we have so far developed. The Gauss-Markov model is the fundamental model for noisy parameter estimation because it estimates unobservable parameters given a noisy indirect measurement. Incarnations o...
import numpy as np
from numpy import matrix, ones

Q = np.eye(3)*0.1  # error covariance matrix
# this is what we are trying to estimate
beta = matrix(ones((2,1)))
W = matrix([[1,2],
            [2,3],
            [1,1]])
chapters/statistics/notebooks/Gauss_Markov.ipynb
unpingco/python_for_prob_stats_ml
mit
Then, we generate the noise terms and create the simulated data, $y$,
ntrials = 50
epsilon = np.random.multivariate_normal((0,0,0), Q, ntrials).T
y = W*beta + epsilon
chapters/statistics/notebooks/Gauss_Markov.ipynb
unpingco/python_for_prob_stats_ml
mit
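For reference, the classic Gauss-Markov point estimate is ordinary least squares, $\widehat{\boldsymbol{\beta}} = (W^T W)^{-1} W^T y$. A minimal sketch with plain ndarrays (the values mirror the setup above; this is our illustration, not the notebook's code):

```python
import numpy as np

rng = np.random.default_rng(0)

W = np.array([[1., 2.], [2., 3.], [1., 1.]])  # known mixing matrix
beta = np.ones((2, 1))                         # "unobservable" true parameters
Q = np.eye(3) * 0.1                            # noise covariance

# simulate one noisy indirect measurement of beta
epsilon = rng.multivariate_normal(np.zeros(3), Q).reshape(3, 1)
y = W @ beta + epsilon

# ordinary least-squares estimate: solve (W^T W) beta_hat = W^T y
beta_hat = np.linalg.solve(W.T @ W, W.T @ y)
print(beta_hat.ravel())  # estimate of beta; near [1, 1] when the noise is small
```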
<!-- dom:FIGURE: [fig-statistics/Gauss_Markov_002.png, width=500 frac=0.85] Focusing on the *xy*-plane in [Figure](#fig:Gauss_Markov_001), the dashed line shows the true value for $\boldsymbol{\beta}$ versus the mean of the estimated values $\widehat{\boldsymbol{\beta}}_m$. <div id="fig:Gauss_Markov_002"></div> --> <!-...
# %matplotlib inline
# from matplotlib.patches import Ellipse
chapters/statistics/notebooks/Gauss_Markov.ipynb
unpingco/python_for_prob_stats_ml
mit
To compute the parameters of the error ellipse based on the covariance matrix of the individual estimates of $\boldsymbol{\beta}$ in the bm_cov variable below,
# U,S,V = linalg.svd(bm_cov)
# err = np.sqrt((matrix(bm))*(bm_cov)*(matrix(bm).T))
# theta = np.arccos(U[0,1])/np.pi*180
chapters/statistics/notebooks/Gauss_Markov.ipynb
unpingco/python_for_prob_stats_ml
mit
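The idea in the commented code — read the ellipse's axes and tilt off a decomposition of the 2×2 covariance — can be written out self-contained with an eigendecomposition (the function name and test covariance are ours, purely illustrative):

```python
import numpy as np

def ellipse_params(cov):
    """Semi-axis lengths and rotation angle (degrees) of the 1-sigma covariance ellipse."""
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]             # sort descending: major axis first
    vals, vecs = vals[order], vecs[:, order]
    angle = np.degrees(np.arctan2(vecs[1, 0], vecs[0, 0]))  # major-axis direction
    return np.sqrt(vals), angle

axes, angle = ellipse_params(np.array([[2.0, 0.0], [0.0, 0.5]]))
print(axes, angle)  # semi-axes sqrt(2) and sqrt(0.5); major axis along x
```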
Then, we add the scaled ellipse in the following,
# ax.add_patch(Ellipse(bm, err*2/np.sqrt(S[0]),
#                      err*2/np.sqrt(S[1]),
#                      angle=theta, color='gray'))
from __future__ import division
from mpl_toolkits.mplot3d import proj3d
from numpy.linalg import inv
import matplotlib.pyplot as plt
import numpy as np
from numpy import matrix, ...
chapters/statistics/notebooks/Gauss_Markov.ipynb
unpingco/python_for_prob_stats_ml
mit
Now reset the notebook's session kernel! Since we're no longer using Cloud Dataflow, we'll be using the python3 kernel from here on out so don't forget to change the kernel if it's still python2.
# Import helpful libraries and setup our project, bucket, and region
import os
import tensorflow as tf
import tensorflow_hub as hub

PROJECT = "cloud-training-demos"  # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml"  # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1"  # REPLACE WITH YOUR BUCKET REGION...
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Now we'll create our model function
# Create custom model function for our custom estimator
def model_fn(features, labels, mode, params):
    # TODO: Create neural network input layer using our feature columns defined above
    # TODO: Create hidden layers by looping through hidden unit list
    # TODO: Compute logits (1 per class) using the output of ...
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
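Conceptually, the TODOs above loop through a list of hidden-unit counts and finish with a logits layer. A shape/flow sketch in plain numpy — random, untrained weights, and deliberately not the `tf.estimator` API — may help before you fill them in:

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_logits(x, hidden_units, n_classes):
    """Toy dense network: loop through hidden layers, then a logits layer.

    Mirrors the structure the TODOs ask for; the weights are random, so this
    only demonstrates the shapes and the layer-stacking loop.
    """
    layer = x
    for units in hidden_units:
        W = rng.normal(size=(layer.shape[1], units))
        b = np.zeros(units)
        layer = np.maximum(layer @ W + b, 0.0)   # ReLU hidden layer
    W_out = rng.normal(size=(layer.shape[1], n_classes))
    return layer @ W_out                          # one logit per class

batch = rng.normal(size=(4, 10))                  # 4 examples, 10 features
logits = forward_logits(batch, hidden_units=[64, 32], n_classes=5)
print(logits.shape)  # (4, 5)
```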
http://seaborn.pydata.org/generated/seaborn.barplot.html#seaborn.barplot
import seaborn as sns; sns.set(color_codes=True)
tips = sns.load_dataset("tips")
ax = sns.barplot(x="day", y="total_bill", data=tips)
ax = sns.barplot(x="day", y="total_bill", hue="sex", data=tips)
code/visualization_with_seaborn.ipynb
computational-class/cjc2016
mit
https://github.com/yufeiminds/echarts-python pip install echarts-python
from echarts import Echart, Legend, Bar, Axis, Line
from IPython.display import HTML

chart = Echart('GDP', 'This is a fake chart')
chart.use(Bar('China', [2, 3, 4, 5]))
chart.use(Legend(['GDP']))
chart.use(Axis('category', 'bottom', data=['Nov', 'Dec', 'Jan', 'Feb']))
chart = Echart('GDP', 'This is a fake chart')
ch...
code/visualization_with_seaborn.ipynb
computational-class/cjc2016
mit
Load data Data source on Kaggle. The data is split into train and test sets. One wave file is one sample. We first load each wave file as a vector of integers and collect them into lists of lists.
def read_dir(dirname, X, y, label):
    for fn in os.listdir(dirname):
        w = wavfile.read(os.path.join(dirname, fn))
        assert w[0] == 16000
        X.append(w[1])
        y.append(label)

X_train_cat = []
y_train_cat = []
X_test_cat = []
y_test_cat = []
X_train_dog = []
y_train_dog = []
X_test_dog =...
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
We discard some of the cat files to balance the classes, then merge the cat and dog matrices.
maxlen = min(len(X_train_cat), len(X_train_dog))
X_train = X_train_cat[:maxlen]
X_train.extend(X_train_dog[:maxlen])
y_train = y_train_cat[:maxlen]
y_train.extend(y_train_dog[:maxlen])
print(maxlen, len(X_train))

maxlen = min(len(X_test_cat), len(X_test_dog))
X_test = X_test_cat[:maxlen]
X_test.extend(X_test_dog[:max...
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Extract samples Wave files are too long and too few. Let's split them into smaller parts. Each part will be 10,000 samples long.
def create_padded_mtx(list_of_lists, labels, maxlen, pad=0):
    X = []
    y = []
    for i, x in enumerate(list_of_lists):
        x_mult = np.array_split(x, x.shape[0] // maxlen)
        for x in x_mult:
            pad_size = maxlen - x.shape[0]
            if pad_size > 0:
                pad = np.zeros((maxlen-l.sh...
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
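The split-and-pad idea above can be sketched as one self-contained function (`split_and_pad` and the toy signals are ours, not from the notebook; we use ceiling division so that the last chunk of each signal is the one that gets zero-padded):

```python
import numpy as np

def split_and_pad(signals, labels, chunk_len):
    """Split each 1-D signal into chunk_len-long pieces, zero-padding short tails."""
    X, y = [], []
    for signal, label in zip(signals, labels):
        signal = np.asarray(signal)
        n_chunks = int(np.ceil(len(signal) / chunk_len))
        for chunk in np.array_split(signal, n_chunks):
            if len(chunk) < chunk_len:
                chunk = np.concatenate([chunk, np.zeros(chunk_len - len(chunk))])
            X.append(chunk)          # every chunk carries its source's label
            y.append(label)
    return np.vstack(X), np.array(y)

# a 25000-sample "recording" and a 12000-sample one, chunked to 10000
X, y = split_and_pad([np.arange(25000), np.arange(12000)], [0, 1], chunk_len=10000)
print(X.shape, y.shape)  # (5, 10000) (5,)
```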
Scale samples Wav samples vary in a large range; we prefer values closer to zero. StandardScaler scales all values so that the dataset has a mean of 0 and a standard deviation of 1. Note that we fit StandardScaler on the train data only and use those values to transform both the train and the test matrices.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
np.mean(X_train), np.std(X_train), np.mean(X_test), np.std(X_test)
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
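What `fit_transform`/`transform` do above is equivalent to computing per-feature mean and standard deviation on the train matrix and reusing those same statistics on the test matrix. A sketch with synthetic data (names and values are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
X_tr = rng.normal(loc=5.0, scale=2.0, size=(100, 3))  # stand-in for X_train
X_te = rng.normal(loc=5.0, scale=2.0, size=(20, 3))   # stand-in for X_test

# fit on the train data only
mu = X_tr.mean(axis=0)
sigma = X_tr.std(axis=0)

# apply the *train* statistics to both matrices
X_tr_s = (X_tr - mu) / sigma
X_te_s = (X_te - mu) / sigma

print(X_tr_s.mean(axis=0), X_tr_s.std(axis=0))  # train set: ~0 mean, ~1 std per feature
```

The test set ends up only approximately standardized, which is the point: it is transformed with statistics it never influenced.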
Shuffle data
shuf_idx = np.arange(X_train.shape[0])
np.random.shuffle(shuf_idx)
X_train = X_train[shuf_idx]
y_train = y_train[shuf_idx]
X_train.shape, X_test.shape
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
The number of unique cat and dog samples:
cnt = np.unique(y_train, return_counts=True)[1]
print("Number of woofs: {}\nNumber of meows: {}".format(cnt[0], cnt[1]))
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Fully connected feed forward network Define the model
input_layer = Input(batch_shape=(None, X_train.shape[1]))
layer = Dense(100, activation="sigmoid")(input_layer)
# randomly disable 20% of the neurons, prevents or reduces overfitting
layer = Dropout(.2)(layer)
layer = Dense(100, activation="sigmoid")(layer)  # chain from the previous layer, not the input
layer = Dropout(.2)(layer)
layer = Dense(1, activation="...
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Train the model Early stopping stops the training if validation loss does not decrease anymore.
%%time
ea = EarlyStopping(patience=2)
model.fit(X_train, y_train, epochs=100, batch_size=32, verbose=1,
          validation_split=.1, callbacks=[ea])
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
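The `patience=2` logic can be sketched in plain Python — a deliberate simplification of Keras's `EarlyStopping` (it ignores `min_delta` and weight restoring; the helper name is ours):

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the 1-based epoch at which training would stop, or None if it never does."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0    # improvement: reset the patience counter
        else:
            wait += 1               # no improvement this epoch
            if wait >= patience:
                return epoch
    return None

# validation loss improves, then stalls for two epochs -> stop at epoch 5
print(early_stop_epoch([1.0, 0.8, 0.7, 0.71, 0.72, 0.5]))  # 5
```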
Predict labels
pred = model.predict(X_test)
labels = np.round(pred)
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Precision, recall and F-score
prec, rec, F, _ = precision_recall_fscore_support(y_test, labels)
print("Dog\n===========\nprec:{}\nrec:{}\nF-score:{}".format(prec[0], rec[0], F[0]))
print("\nCat\n===========\nprec:{}\nrec:{}\nF-score:{}".format(prec[1], rec[1], F[1]))
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
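For a binary task, `precision_recall_fscore_support` boils down to true/false positive counts per class. A hand-rolled version for one class (toy labels, hypothetical helper name):

```python
import numpy as np

def prf_binary(y_true, y_pred, positive):
    """Precision, recall and F1 for one class, from raw TP/FP/FN counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)   # harmonic mean of precision and recall
    return prec, rec, f1

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
print(prf_binary(y_true, y_pred, positive=1))
```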
Convolutional neural network CNNs are not only good at image processing but also at handling long temporal data such as audio files. Convert data to 3D tensors CNNs and RNNs require 3D tensors instead of 2D tensors (normal matrices). 3D tensors are usually shaped as batch_size x timestep x feature_number, where batch_s...
X_train_3d = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
X_test_3d = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Define the model
# list of convolutional layers, try adding more or changing the parameters
conv_layers = [
    {'filters': 200, 'kernel_size': 40, 'strides': 2, 'padding': "same", 'activation': "relu"},
    {'filters': 200, 'kernel_size': 10, 'strides': 10, 'padding': "same", 'activation': "relu"},
    {'filters': 50, 'kernel_size': 1...
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Train the model
%%time
ea = EarlyStopping(patience=2)
hist = m.fit(X_train_3d, y_train, epochs=1000, batch_size=128,
             validation_split=.1, callbacks=[ea])
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Predict labels
pred = m.predict(X_test_3d)
labels = (pred > 0.5).astype(int)
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Evaluate
prec, rec, F, _ = precision_recall_fscore_support(y_test, labels)
print("Dog\n===========\nprec:{}\nrec:{}\nF-score:{}".format(prec[0], rec[0], F[0]))
print("\nCat\n===========\nprec:{}\nrec:{}\nF-score:{}".format(prec[1], rec[1], F[1]))
notebooks/bi_ea_demo/cat_dog.ipynb
juditacs/labor
lgpl-3.0
Now let us write a small class implementing 1D semi-infinite leads. The class below is just a simple collection of the analytic formulae for the spectrum, group velocity and the surface Green's function. Most functions have some meaningful default parameters (e.g.: hopping $\gamma=-1$, on-site potential $\varepsilon_0=0$)...
class lead1D:
    'A class for simple 1D leads'
    def __init__(self, eps0=0, gamma=-1, **kwargs):
        'We assume real hopping \gamma and onsite \epsilon_0 parameters!'
        self.eps0 = eps0
        self.gamma = gamma
        return
    def Ek(self, k):
        'Spectrum as a function of k'
        return self.eps...
1D.ipynb
oroszl/mezo
gpl-3.0
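The surface Green's function of such a semi-infinite chain must satisfy the Dyson self-consistency $g = 1/(E - \varepsilon_0 - \gamma^2 g)$, and inside the band it approaches $(E - i\sqrt{4\gamma^2 - E^2})/(2\gamma^2)$ on the retarded branch. A standalone numerical check under the same conventions as the class above ($\varepsilon_0=0$, $\gamma=-1$; the function name and fixed-point iteration are our sketch, not the notebook's analytic code):

```python
import numpy as np

def surface_g(E, eps0=0.0, gamma=-1.0, eta=1e-3, n_iter=50000):
    """Surface Green's function of a semi-infinite 1D chain.

    Iterates the Dyson self-consistency g = 1/(E - eps0 - gamma^2 g) to its
    fixed point; the small imaginary part eta selects the retarded branch.
    """
    z = E + 1j * eta
    g = 0j
    for _ in range(n_iter):
        g = 1.0 / (z - eps0 - gamma**2 * g)
    return g

g = surface_g(0.5)
print(g)  # should sit close to the analytic (E - i*sqrt(4 - E^2))/2 inside the band
```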
Let us investigate the following simple scattering setups: For each system we shall write a small function that generates the scattering matrix of the problem as a function of the energy $E$ and other relevant parameters. We start with the tunnel junction, where the only parameter is $\alpha$, the hopping matrix element coupling t...
def Smat_tunnel(E, alpha):
    # Definition of the leads
    L1 = lead1D()
    L2 = lead1D()
    # In order to make stuff meaningful outside of the band
    # we add a tiny imaginary part to the energy
    E = E + 0.0000001j
    # Green's function of decoupled leads
    g0 = array(...
1D.ipynb
oroszl/mezo
gpl-3.0
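As a cross-check on the scattering-matrix route, the same tunnel junction's transmission can be assembled from the lead surface Green's functions via the standard NEGF formula $T = \mathrm{Tr}[\Gamma_L G \Gamma_R G^\dagger]$. This sketch uses the same chain conventions ($\varepsilon_0=0$, $\gamma=-1$) but is our illustration, not the notebook's Fisher-Lee construction:

```python
import numpy as np

def surface_g(z, eps0=0.0, gamma=-1.0, n_iter=50000):
    """Retarded surface Green's function of a semi-infinite 1D chain (Dyson iteration)."""
    g = 0j
    for _ in range(n_iter):
        g = 1.0 / (z - eps0 - gamma**2 * g)
    return g

def transmission_tunnel(E, alpha, gamma=-1.0, eta=1e-3):
    """Transmission of two 1D leads joined by a hopping alpha (NEGF route)."""
    z = E + 1j * eta
    sigma = gamma**2 * surface_g(z, gamma=gamma)    # self-energy of each lead
    H = np.array([[0.0, alpha], [alpha, 0.0]])      # the two coupled end sites
    Sigma_L = np.diag([sigma, 0.0])
    Sigma_R = np.diag([0.0, sigma])
    G = np.linalg.inv(z * np.eye(2) - H - Sigma_L - Sigma_R)
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)     # level-broadening matrices
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real

print(transmission_tunnel(0.5, alpha=-1.0))  # close to 1: alpha = gamma restores the perfect chain
```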
Now we write a small script to generate an interactive figure as a function of $\alpha$, so we can explore the parameter space. We have also included an extra factor on top of the Fisher-Lee relations that accounts for whether a channel is open or not.
energy_range = linspace(-4, 4, 1000)  # this will be the plotted energy range
figsize(8, 6)  # setting the figure size
fts = 20  # the default font is a bit too small so we make it larger

# using the interact decorator we can have an interactive plot
@interact(alpha=FloatSlider(min=-2, max=2, step=0.1, value=-1, description=r'...
1D.ipynb
oroszl/mezo
gpl-3.0
Similarly to the tunnel junction, we start with a function that generates the scattering matrix. We have to be careful, since the Green's function of the decoupled system is now a $3\times3$ object.
def Smat_BW(E, t1, t2, eps1):
    # Definition of the leads
    L1 = lead1D()
    L2 = lead1D()
    # In order to make stuff meaningful outside of the band
    # we add a tiny imaginary part to the energy
    E = E + 0.000000001j
    # Green's function of the decoupled system
    # Note ...
1D.ipynb
oroszl/mezo
gpl-3.0
Now we can again write a script for a nice interactive plot
energy_range = linspace(-4, 4, 1000)
figsize(8, 6)
fts = 20

@interact(t1=FloatSlider(min=-2, max=2, step=0.1, value=-1, description=r'$\gamma_L$'),
          t2=FloatSlider(min=-2, max=2, step=0.1, value=-1, description=r'$\gamma_R$'),
          eps1=FloatSlider(min=-2, max=2, step=0.1, value=0, description=r'$\varepsilon_1$'))
def BW(t1...
1D.ipynb
oroszl/mezo
gpl-3.0
Exercises Using the above two examples, write code that explores the Fano resonance of a resonant level coupled sideways to a lead! Using the above examples, write code that explores the Aharonov-Bohm effect! Some more examples A simple potential step. The onsite potential in the right lead is shifted by $V_0$. Now i...
def Smat_step(E, V0):
    # Definition of the leads
    L1 = lead1D()
    L2 = lead1D(eps0=V0)
    # In order to make stuff meaningful outside of the band
    # we add a tiny imaginary part to the energy
    E = E + 0.0001j
    # Green's function of decoupled leads
    g0 = array([...
1D.ipynb
oroszl/mezo
gpl-3.0
This is a simple model of a Fabry-Perot resonator realised by two tunnel barriers. This example also illustrates how we can have a larger scattering region.
def Smat_FP(E, t1, t2, N):
    # Definition of the leads
    L1 = lead1D()
    L2 = lead1D()
    # In order to make stuff meaningful outside of the band
    # we add a tiny imaginary part to the energy
    E = E + 0.000000001j
    # Green's function of decoupled system
    ...
1D.ipynb
oroszl/mezo
gpl-3.0
At this point my kernel crashed and I had to reload the data from CSV. This served as a natural break point, because the API heavy-lifting just finished.
# http://stackoverflow.com/questions/21269399/datetime-dtypes-in-pandas-read-csv
col_dtypes = {"": int, "community_area": float, "completion_date": object,
              "creation_date": object, "latitude": float,
              "longitude": float, "service_request_number": object,
              "type_of_service_request": str, "URLS"...
Diagnostic/.ipynb_checkpoints/ML HW0-1-checkpoint.ipynb
anisfeld/MachineLearning
mit