From this we see that the index is indeed using the timing information in the file, and we can see that the dtype is datetime. Selecting rows and columns of data In particular, we will select rows based on the index. Since in this example we are indexing by time, we can use human-readable notation to select based on da...
df['trip_distance']
materials/4_pandas.ipynb
hetland/python4geosciences
mit
We can equivalently access the columns of data as attributes. This means that we can use tab autocompletion to see the methods and data available in a dataframe.
df.trip_distance
materials/4_pandas.ipynb
hetland/python4geosciences
mit
We can plot in this way, too:
df['trip_distance'].plot(figsize=(14,6))
materials/4_pandas.ipynb
hetland/python4geosciences
mit
Simple data selection One of the biggest benefits of using pandas is being able to easily reference the data in intuitive ways. For example, because we set up the index of the dataframe to be the date and time, we can pull out data using dates. In the following, we pull out all data from the first hour of the day:
df['2016-05-01 00']
materials/4_pandas.ipynb
hetland/python4geosciences
mit
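The date-based selection above can be sketched on a small synthetic frame (the index and values here are invented for illustration). Note that in recent pandas versions, selecting rows with `df['2016-05-01 00']` is deprecated; `.loc` does the same partial-string matching:

```python
import pandas as pd

# Hypothetical frame with a DatetimeIndex, mimicking the taxi data.
idx = pd.date_range('2016-05-01 00:00', periods=6, freq='30min')
toy = pd.DataFrame({'passenger_count': [1, 2, 1, 3, 2, 1]}, index=idx)

# Partial-string indexing: all rows that fall within hour 00.
first_hour = toy.loc['2016-05-01 00']
print(len(first_hour))  # the rows at 00:00 and 00:30
```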
Here we further subdivide to examine the passenger count during that time period:
df['passenger_count']['2016-05-01 00']
materials/4_pandas.ipynb
hetland/python4geosciences
mit
We can also access a range of data, for example any data rows from midnight until noon:
df['2016-05-01 00':'2016-05-01 11']
materials/4_pandas.ipynb
hetland/python4geosciences
mit
If you want more control over your selection: the following, which adds minutes to the date string, does not work:
df['2016-05-01 00:30']
materials/4_pandas.ipynb
hetland/python4geosciences
mit
However, we can use another approach to have more control, with .loc to access combinations of specific columns and/or rows, or subsets of columns and/or rows.
df.loc['2016-05-01 00:30']
materials/4_pandas.ipynb
hetland/python4geosciences
mit
You can also select data for more specific time periods. df.loc[row_label, col_label]
df.loc['2016-05-01 00:30', 'passenger_count']
materials/4_pandas.ipynb
hetland/python4geosciences
mit
You can select more than one column:
df.loc['2016-05-01 00:30', ['passenger_count','trip_distance']]
materials/4_pandas.ipynb
hetland/python4geosciences
mit
You can select a range of data:
df.loc['2016-05-01 00:30':'2016-05-01 01:30', ['passenger_count','trip_distance']]
materials/4_pandas.ipynb
hetland/python4geosciences
mit
You can alternatively select data by index instead of by label, using iloc instead of loc. Here we select the first 5 rows of data for all columns:
df.iloc[0:5, :]
materials/4_pandas.ipynb
hetland/python4geosciences
mit
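A small synthetic frame (index and column names invented for illustration) makes the label-vs-position distinction concrete:

```python
import pandas as pd

# Toy frame to contrast label-based .loc with position-based .iloc.
toy = pd.DataFrame({'a': [10, 20, 30], 'b': [1, 2, 3]},
                   index=['x', 'y', 'z'])

by_label = toy.loc['x':'y', ['a']]   # .loc slices include both endpoints
by_position = toy.iloc[0:2, 0:1]     # .iloc slices exclude the stop index

print(by_label.equals(by_position))  # True: the same rows either way
```

The off-by-one feel of the two slice styles is the main trap: `.loc` is inclusive on both ends because labels have no natural "one past the end".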
Exercise Access the data from dataframe df for the last three hours of the day at once. Plot the tip amount (tip_amount) for this time period. After you can make a line plot, try making a histogram of the data. Play around with the data range and the number of bins. A number of plot types are available built-in to a p...
df = pd.read_csv('../data/yellow_tripdata_2016-05-01_decimated.csv', parse_dates=[0, 2], index_col=[0]) df.index df.index.strftime('%b %d, %Y %H:%M')  # %M is minutes; %m would insert the month
materials/4_pandas.ipynb
hetland/python4geosciences
mit
You can create and use datetimes using pandas. It will interpret the information you put into a string as best it can. Year-month-day is a good way to put in dates instead of using either American or European-specific ordering. After defining a pandas Timestamp, you can also change time using Timedelta.
now = pd.Timestamp('October 22, 2019 1:19PM') now tomorrow = pd.Timedelta('1 day') now + tomorrow
materials/4_pandas.ipynb
hetland/python4geosciences
mit
You can set up a range of datetimes to make your own data frame indices with the following. Codes for frequency are available.
pd.date_range(start='Jan 1 2019', end='May 1 2019', freq='15T')
materials/4_pandas.ipynb
hetland/python4geosciences
mit
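Such a range can be attached to data directly to build an indexed frame. This sketch uses invented values and the 'min' alias (recent pandas prefers 'min' over 'T' for minutes):

```python
import numpy as np
import pandas as pd

# Hypothetical 15-minute index over one hour, endpoints inclusive.
idx = pd.date_range(start='Jan 1 2019', end='Jan 1 2019 01:00', freq='15min')
frame = pd.DataFrame({'value': np.arange(len(idx))}, index=idx)
print(len(idx))  # 5 stamps: 00:00, 00:15, 00:30, 00:45, 01:00
```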
Note that you can get many different measures of your time index.
df.index.minute df.index.dayofweek
materials/4_pandas.ipynb
hetland/python4geosciences
mit
Exercise How would you change the call to strftime above to format all of the indices such that the first index, for example, would be "the 1st of May, 2016 at the hour of 00 and the minute of 00 and the seconds of 00, which is the following day of the week: Sunday." Use the format codes for as many of the values as p...
df['tip squared'] = df.tip_amount**2 # making up some numbers to save to a new column df['tip squared'].plot()
materials/4_pandas.ipynb
hetland/python4geosciences
mit
Another example: Wind data Let's read in the wind data file that we have used before to have another data set to use. Note the parameters used to read it in properly.
df2 = pd.read_table('../data/burl1h2010.txt', header=0, skiprows=[1], delim_whitespace=True, parse_dates={'dates': ['#YY', 'MM', 'DD', 'hh']}, index_col=0) df2 df2.index
materials/4_pandas.ipynb
hetland/python4geosciences
mit
Plotting with pandas You can plot with matplotlib and control many things directly from pandas. Get more info about plotting from pandas dataframes directly from:
df.plot?
materials/4_pandas.ipynb
hetland/python4geosciences
mit
You can mix and match plotting with matplotlib by either setting up a figure and axes you want to use with calls to plot from your dataframe (which you input to the plot call), or you can start with a pandas plot and save an axes from that call. Each will be demonstrated next. Or, you can bring the pandas data to matpl...
import matplotlib.pyplot as plt fig, axes = plt.subplots(1, 2, figsize=(14,4)) df2['WSPD']['2010-5'].plot(ax=axes[0]) df2.loc['2010-5'].plot(y='WSPD', ax=axes[1])
materials/4_pandas.ipynb
hetland/python4geosciences
mit
Start with pandas, then use matplotlib commands The important part here is that the call to pandas dataframe plotting returns an axes handle which you can save; here, it is saved as "ax".
ax = df2['WSPD']['2010 11 1'].plot() ax.set_ylabel('Wind speed')
materials/4_pandas.ipynb
hetland/python4geosciences
mit
Bring pandas dataframe data to matplotlib fully You can also use matplotlib directly by pulling the data you want to plot out of your dataframe.
plt.plot(df2['WSPD'])
materials/4_pandas.ipynb
hetland/python4geosciences
mit
Plot all or multiple columns at once
# all df2.plot()
materials/4_pandas.ipynb
hetland/python4geosciences
mit
To plot more than one but less than all columns, give a list of column names. Here are two ways to do the same thing:
# multiple fig, axes = plt.subplots(1, 2, figsize=(14,4)) df2[['WSPD', 'GST']].plot(ax=axes[0]) df2.plot(y=['WSPD', 'GST'], ax=axes[1])
materials/4_pandas.ipynb
hetland/python4geosciences
mit
Formatting dates You can control how datetimes look on the x axis in these plots as demonstrated in this section. The formatting codes used in the call to DateFormatter are the same as those used above in this notebook for strftime. Note that you can also control all of this with minor ticks additionally.
ax = df2['WSPD'].plot(figsize=(14,4)) from matplotlib.dates import DateFormatter ax = df2['WSPD'].plot(figsize=(14,4)) ax.set_xlabel('2010') date_form = DateFormatter("%b %d") ax.xaxis.set_major_formatter(date_form) # import matplotlib.dates as mdates # # You can also control where the ticks are located, by date wi...
materials/4_pandas.ipynb
hetland/python4geosciences
mit
Plotting with twin axis You can very easily plot two variables with different y axis limits with the secondary_y keyword argument to df.plot.
axleft = df2['WSPD']['2010-10'].plot(figsize=(14,4)) axright = df2['WDIR']['2010-10'].plot(secondary_y=True, alpha=0.5) axleft.set_ylabel('Speed [m/s]', color='blue'); axright.set_ylabel('Dir [degrees]', color='orange');
materials/4_pandas.ipynb
hetland/python4geosciences
mit
Resampling Sometimes we want our data to be at a different sampling frequency than we have; that is, we want to change the time between rows or observations. Changing this is called resampling. We can upsample to increase the number of data points in a given dataset (or decrease the period between points) or we can dow...
df2.resample('1d').max() #['DEWP'] # now the data is daily
materials/4_pandas.ipynb
hetland/python4geosciences
mit
It's always important to check our results to make sure they look reasonable. Let's plot our resampled data with the original data to make sure they align well. We'll choose one variable for this check. We can see that the daily max wind gust does indeed look like the max value for each day, though note that it is plot...
df2['GST']['2010-4-1':'2010-4-5'].plot() df2.resample('1d').max()['GST']['2010-4-1':'2010-4-5'].plot()
materials/4_pandas.ipynb
hetland/python4geosciences
mit
We can also upsample our data or add more rows of data. Note that like before, after we resample our data we still need a method on the end telling pandas how to process the data. However, since in this case we are not combining data (downsampling) but are adding more rows (upsampling), using a function like max doesn'...
df2.resample('30min').max() # max doesn't say what to do with data in new rows
materials/4_pandas.ipynb
hetland/python4geosciences
mit
When upsampling, a reasonable option is to fill the new rows with data from the previous existing row:
df2.resample('30min').ffill()
materials/4_pandas.ipynb
hetland/python4geosciences
mit
Here we upsample to have data every 15 minutes, but we interpolate to fill in the data between. This is a very useful thing to be able to do.
df2.resample('15T').interpolate()
materials/4_pandas.ipynb
hetland/python4geosciences
mit
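Both directions of resampling can be sketched on a toy hourly series (values invented):

```python
import pandas as pd

# Four hourly observations on one day.
idx = pd.date_range('2020-01-01', periods=4, freq='60min')
s = pd.Series([1.0, 3.0, 2.0, 4.0], index=idx)

daily_max = s.resample('D').max()              # downsample: combine rows
half_hour = s.resample('30min').interpolate()  # upsample: fill new rows by interpolation

print(daily_max.iloc[0])  # 4.0, the max of the four hourly values
print(half_hour.iloc[1])  # 2.0, midway between 1.0 and 3.0
```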
The codes for time period/frequency are available and are presented here for convenience:

| Alias | Description |
| --- | --- |
| B | business day frequency |
| C | custom business day frequency (experimental) |
| D | calendar day frequency |
| W | weekly frequency |
| M | month end frequency |
| SM | semi-month end frequency (15th and end of month) |
| BM | busin... |
df3 = pd.read_table('http://pong.tamu.edu/tabswebsite/daily/tabs_V_salt_all', index_col=0, parse_dates=True) df3 ax = df3.groupby(df3.index.month).aggregate(np.mean)['Salinity'].plot(color='k', grid=True, figsize=(14, 4), marker='o') # the x axis is now showing month of the year, which is what we aggregated over ax.s...
materials/4_pandas.ipynb
hetland/python4geosciences
mit
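The groupby-on-an-index-attribute pattern used for the salinity data can be sketched on synthetic daily data (values and lengths invented):

```python
import pandas as pd

# Sixty days starting Jan 1 2020: exactly January (31) plus leap-year February (29).
idx = pd.date_range('2020-01-01', periods=60, freq='D')
s = pd.Series(range(60), index=idx)

# Group by month number drawn from the DatetimeIndex, then aggregate.
monthly_mean = s.groupby(s.index.month).mean()
print(monthly_mean.index.tolist())  # [1, 2]: month numbers become the new index
```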
Using httpbin.org: https://httpbin.org/delay/1
import requests from datetime import datetime def requests_get(index=None): response = requests.get("https://httpbin.org/delay/1") response.raise_for_status() print(f"{index} - {response.status_code} - {response.elapsed}") requests_get() before = datetime.now() for index in range(0, 5): requests_get(index) after = datetime....
HTTPX/HTTPX.ipynb
CLEpy/CLEpy-MotM
mit
We may now define a parametrized function using JAX. This will allow us to efficiently compute gradients. There are a number of libraries that provide common building blocks for parametrized functions (such as flax and haiku). For this case though, we shall implement our function from scratch. Our function will be a 1-...
initial_params = { 'hidden': jax.random.normal(shape=[8, 32], key=jax.random.PRNGKey(0)), 'output': jax.random.normal(shape=[32, 2], key=jax.random.PRNGKey(1)), } def net(x: jnp.ndarray, params: jnp.ndarray) -> jnp.ndarray: x = jnp.dot(x, params['hidden']) x = jax.nn.relu(x) x = jnp.dot(x, params['outpu...
docs/optax-101.ipynb
deepmind/optax
apache-2.0
We will use optax.adam to compute the parameter updates from their gradients on each optimizer step. Note that since optax optimizers are implemented using pure functions, we will need to also keep track of the optimizer state. For the Adam optimizer, this state will contain the momentum values.
def fit(params: optax.Params, optimizer: optax.GradientTransformation) -> optax.Params: opt_state = optimizer.init(params) @jax.jit def step(params, opt_state, batch, labels): loss_value, grads = jax.value_and_grad(loss)(params, batch, labels) updates, opt_state = optimizer.update(grads, opt_state, param...
docs/optax-101.ipynb
deepmind/optax
apache-2.0
We see that our loss appears to have converged, which should indicate that we have successfully found better parameters for our network. Weight Decay, Schedules and Clipping Many research models make use of techniques such as learning rate scheduling and gradient clipping. These may be achieved by chaining together gra...
schedule = optax.warmup_cosine_decay_schedule( init_value=0.0, peak_value=1.0, warmup_steps=50, decay_steps=1_000, end_value=0.0, ) optimizer = optax.chain( optax.clip(1.0), optax.adamw(learning_rate=schedule), ) params = fit(initial_params, optimizer)
docs/optax-101.ipynb
deepmind/optax
apache-2.0
A Shift-Reduce Parser for Arithmetic Expressions In this notebook we implement a generic shift-reduce parser. The parse table that we use implements the following grammar for arithmetic expressions: $$ \begin{eqnarray} \mathrm{expr} & \rightarrow & \mathrm{expr}\;\;\texttt{'+'}\;\;\mathrm{product} \\ ...
import re
Python/Shift-Reduce-Parser-Pure.ipynb
karlstroetmann/Formal-Languages
gpl-2.0
The function tokenize scans the string s into a list of tokens using Python's regular expressions. The scanner distinguishes between * whitespace, which is discarded, * numbers, * arithmetic operators and parentheses, * all remaining characters, which are treated as lexical errors. See below for an example.
def tokenize(s): '''Transform the string s into a list of tokens. The string s is supposed to represent an arithmetic expression. ''' lexSpec = r'''([ \t\n]+) | # blanks and tabs ([1-9][0-9]*|0) | # number ([-+*/()]) | # arithmetical operators and par...
Python/Shift-Reduce-Parser-Pure.ipynb
karlstroetmann/Formal-Languages
gpl-2.0
Assume a grammar $G = \langle V, T, R, S \rangle$ is given. A shift-reduce parser is defined as a 4-Tuple $$P = \langle Q, q_0, \texttt{action}, \texttt{goto} \rangle$$ where - $Q$ is the set of states of the shift-reduce parser. For the purpose of the shift-reduce-parser, states are purely abstract. - $q_0 \in Q$...
class ShiftReduceParser(): def __init__(self, actionTable, gotoTable): self.mActionTable = actionTable self.mGotoTable = gotoTable
Python/Shift-Reduce-Parser-Pure.ipynb
karlstroetmann/Formal-Languages
gpl-2.0
The method parse takes a list of tokens TL as its argument. It returns True if the token list can be parsed successfully and False otherwise. The algorithm that is applied is known as shift-reduce parsing.
def parse(self, TL): index = 0 # points to next token Symbols = [] # stack of symbols States = ['s0'] # stack of states, s0 is start state TL += ['EOF'] while True: q = States[-1] t = TL[index] # Below, an undefined table entry is interpreted as an error entry...
Python/Shift-Reduce-Parser-Pure.ipynb
karlstroetmann/Formal-Languages
gpl-2.0
Details of the "Happy" dataset: - Images are of shape (64,64,3) - Training: 600 pictures - Test: 150 pictures It is now time to solve the "Happy" Challenge. 2 - Building a model in Keras Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results. H...
# GRADED FUNCTION: HappyModel def HappyModel(input_shape): """ Implementation of the HappyModel. Arguments: input_shape -- shape of the images of the dataset Returns: model -- a Model() instance in Keras """ ### START CODE HERE ### # Define the input placeholder as a tens...
deep-learnining-specialization/4. Convolutional Neural Networks/week2/Keras+-+Tutorial+-+Happy+House+v2.ipynb
diegocavalca/Studies
cc0-1.0
4. Find a reasonable threshold to say exposure is high and recode the data
df['High_Exposure'] = df['Exposure'].apply(lambda x:1 if x > 3.41 else 0)
class7/donow/hon_jingyi_donow_7.ipynb
ledeprogram/algorithms
gpl-3.0
5. Create a logistic regression model
lm = LogisticRegression() x = np.asarray(df[['Mortality']]) y = np.asarray(df['High_Exposure']) # logistic regression needs the binary target created in step 4, not the raw Exposure values lm = lm.fit(x, y)
class7/donow/hon_jingyi_donow_7.ipynb
ledeprogram/algorithms
gpl-3.0
# Creating and Manipulating Tensors Learning objectives: * Initialize and assign TensorFlow Variable objects * Create and manipulate tensors * Refresh your knowledge of sums and products in linear algebra (recommended reading: the introduction to matrix addition and multiplica...
from __future__ import print_function import tensorflow as tf
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
## Vector Sums You can perform many standard mathematical operations on tensors (see the TensorFlow API). The following code creates and manipulates two vectors (1-D tensors), each made up of six elements:
with tf.Graph().as_default(): # Create a six-element vector (1-D tensor). primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32) # Create another six-element vector. Each element in the vector will be # initialized to 1. The first argument is the shape of the tensor (more # on shapes below). ones = tf....
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
### Tensor Shapes The shape characterizes the size and number of dimensions of a tensor. It is given as a list whose i-th element is the size along dimension i. The length of the list gives the rank of the tensor (that is, its number of dimensions). To learn more, see...
with tf.Graph().as_default(): # A scalar (0-D tensor). scalar = tf.zeros([]) # A vector with 3 elements. vector = tf.zeros([3]) # A matrix with 2 rows and 3 columns. matrix = tf.zeros([2, 3]) with tf.Session() as sess: print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.eval()) ...
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
### Broadcasting In mathematics, only tensors of identical shape can undergo element-wise operations (add and equals, for example). In TensorFlow, however, you can perform operations on tensors that would traditionally have been incompatible: TensorFlow supports broadcasting...
with tf.Graph().as_default(): # Create a six-element vector (1-D tensor). primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32) # Create a constant scalar with value 1. ones = tf.constant(1, dtype=tf.int32) # Add the two tensors. The resulting tensor is a six-element vector. just_beyond_primes = tf.a...
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
## Matrix Multiplication In linear algebra, when you compute the product of two matrices, the number of columns in the first must equal the number of rows in the second. A 3x4 matrix can be multiplied by a 4x2 matrix, yielding a 3x2 matrix. A 4x2 matrix cannot be multiplied ...
with tf.Graph().as_default(): # Create a matrix (2-d tensor) with 3 rows and 4 columns. x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]], dtype=tf.int32) # Create a matrix with 4 rows and 2 columns. y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32) # Multiply ...
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
## Reshaping Tensors Tensor addition and matrix multiplication each impose specific constraints on their operands, which forces TensorFlow programmers to reshape tensors regularly. The tf.reshape method changes the shape of a tensor. For ex...
with tf.Graph().as_default(): # Create an 8x2 matrix (2-D tensor). matrix = tf.constant([[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32) # Reshape the 8x2 matrix into a 2x8 matrix. reshaped_2x8_matrix = tf.reshape(matrix, [2,8]) # Reshape the 8x2 ma...
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
You can also use tf.reshape to change the number of dimensions (the 'rank') of a tensor. For example, the same 8x2 tensor can be reshaped into a three-dimensional 2x2x4 tensor, or into a one-dimensional 16-element tensor.
with tf.Graph().as_default(): # Create an 8x2 matrix (2-D tensor). matrix = tf.constant([[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32) # Reshape the 8x2 matrix into a 3-D 2x2x4 tensor. reshaped_2x2x4_tensor = tf.reshape(matrix, [2,2,4]) # Reshape ...
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
### Exercise 1: Reshape two tensors in order to multiply them Matrix multiplication is impossible on the following two vectors: a = tf.constant([5, 3, 2, 7, 1, 4]) b = tf.constant([4, 6, 3]) Reshape them into operands compatible with matrix multiplica...
# Write your code for Task 1 here.
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
### Solution Click below to reveal the solution.
with tf.Graph().as_default(), tf.Session() as sess: # Task: Reshape two tensors in order to multiply them # Here are the original operands, which are incompatible # for matrix multiplication: a = tf.constant([5, 3, 2, 7, 1, 4]) b = tf.constant([4, 6, 3]) # We need to reshape at least one of these operand...
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
## Variables, Initialization, and Assignment The operations performed so far involved only static values (tf.constant); calling eval() always returned the same result. TensorFlow lets you define Variable objects, whose values can change. When ...
g = tf.Graph() with g.as_default(): # Create a variable with the initial value 3. v = tf.Variable([3]) # Create a variable of shape [1], with a random initial value, # sampled from a normal distribution with mean 1 and standard deviation 0.35. w = tf.Variable(tf.random_normal([1], mean=1.0, stddev=0.35))
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
One TensorFlow quirk is that variable initialization is not automatic; the following block will therefore raise an error:
with g.as_default(): with tf.Session() as sess: try: v.eval() except tf.errors.FailedPreconditionError as e: print("Caught expected error: ", e)
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
The simplest way to initialize a variable is to call global_variables_initializer. The Session.run() method used here is roughly equivalent to eval().
with g.as_default(): with tf.Session() as sess: initialization = tf.global_variables_initializer() sess.run(initialization) # Now, variables can be accessed normally, and have values assigned to them. print(v.eval()) print(w.eval())
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
Once initialized, variables keep their value for the whole session (they must be re-initialized when starting a new session):
with g.as_default(): with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # These three prints will print the same value. print(w.eval()) print(w.eval()) print(w.eval())
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
To change a variable's value, use the assign op. Merely creating the op has no effect; as with initialization, you must run the assignment op (via run) to update the variable's value:
with g.as_default(): with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # This should print the variable's initial value. print(v.eval()) assignment = tf.assign(v, [7]) # The variable has not been changed yet! print(v.eval()) # Execute the assignment op. sess.run(...
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
Loading, saving... there is no shortage of topics around variables. To learn more about a subject not covered in this course, see the TensorFlow documentation. ### Exercise 2: Simulate 10 rolls of two dice Simulate a dice roll that generates a 10x3 two-dimensional tensor: Columns 1 ...
# Write your code for Task 2 here.
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
### Solution Click below to reveal the solution.
with tf.Graph().as_default(), tf.Session() as sess: # Task 2: Simulate 10 throws of two dice. Store the results # in a 10x3 matrix. # We're going to place dice throws inside two separate # 10x1 matrices. We could have placed dice throws inside # a single 10x2 matrix, but adding different columns of # the s...
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
google/eng-edu
apache-2.0
Fine-tuning the model using grid search
from sklearn.svm import SVC from sklearn.model_selection import cross_val_score, GridSearchCV from sklearn.pipeline import Pipeline from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier() parameters = {'n_neighbors':[1,]} grid = GridSearchCV(knn, parameters, n_jobs=-1, verbose=1, scoring='accuracy') grid.fit(X...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Cross-validation for SVM
tt7 = time() print("cross result========") scores = cross_val_score(svc, X, y, cv=5) print(scores) print(scores.mean()) tt8 = time() print("time elapsed: ", tt7 - tt6) print("\n") from sklearn.svm import SVC from sklearn.model_selection import cross_val_score from sklearn.pipeline import Pipeline from sklearn imp...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Unsupervised Learning
features = ['Age', 'Specs', 'Astigmatic', 'Tear-Production-Rate'] df1 = df[features] df1.head()
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
PCA
# Apply PCA with the same number of dimensions as variables in the dataset from sklearn.decomposition import PCA pca = PCA(n_components=4) # 4 components for the 4 features pca.fit(df1) # Print the components and the amount of variance in the data contained in each dimension print(pca.components_) print(pca.explained_var...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Clustering
# Import clustering modules from sklearn.cluster import KMeans from sklearn.mixture import GaussianMixture # GMM was removed from scikit-learn; GaussianMixture replaces it # First we reduce the data to two dimensions using PCA to capture variation pca = PCA(n_components=2) reduced_data = pca.fit_transform(df1) print(reduced_data[:10]) # print up to 10 elements # Implement your clustering algo...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Elbow Method
distortions = [] for i in range(1, 11): km = KMeans(n_clusters=i, init='k-means++', n_init=10, max_iter=300, random_state=0) km.fit(X) distortions.append(km.inertia_) plt.plot(range(1, 11), distortions, marker='o') plt.xlabel('Number of cl...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Quantifying the quality of clustering via silhouette plots
import numpy as np from matplotlib import cm from sklearn.metrics import silhouette_samples km = KMeans(n_clusters=3, init='k-means++', n_init=10, max_iter=300, tol=1e-04, random_state=0) y_km = km.fit_predict(X) cluster_labels = np.unique(y_km) n_cluster...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Our clustering with 3 centroids is good. For comparison, here is a bad clustering, using 4 centroids:
km = KMeans(n_clusters=4, init='k-means++', n_init=10, max_iter=300, tol=1e-04, random_state=0) y_km = km.fit_predict(X) cluster_labels = np.unique(y_km) n_clusters = cluster_labels.shape[0] silhouette_vals = silhouette_samples(X, y_km, metric='euclidean')...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Organizing clusters as a hierarchical tree Performing hierarchical clustering on a distance matrix To calculate the distance matrix as input for the hierarchical clustering algorithm, we will use the pdist function from SciPy's spatial.distance submodule:
labels = [] for i in range(df1.shape[0]): str = 'ID_{}'.format(i) labels.append(str) from scipy.spatial.distance import pdist,squareform row_dist = pd.DataFrame(squareform(pdist(df1, metric='euclidean')), columns=labels, index=labels) row_dist[:5] # 1. incorrect approach: Squareform distance matrix from sci...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
As shown in the following table, the linkage matrix consists of several rows, where each row represents one merge. The first and second columns denote the most dissimilar members in each cluster, and the third column reports the distance between those members. The last column returns the count of the members in each cluste...
from scipy.cluster.hierarchy import dendrogram # make dendrogram black (part 1/2) # from scipy.cluster.hierarchy import set_link_color_palette # set_link_color_palette(['black']) row_dendr = dendrogram(row_clusters, labels=labels, # make dendrogram black (part 2/2) ...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
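A tiny self-contained illustration of that column layout, using synthetic points and the same complete linkage as the notebook:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Two tight pairs of 2-D points, so the first merges are obvious.
X = np.array([[0.0, 0.0], [0.0, 0.1], [5.0, 5.0], [5.0, 5.1]])
row_clusters = linkage(X, method='complete', metric='euclidean')

# Each row is one merge: [member_i, member_j, distance, member_count].
print(row_clusters.shape)   # (3, 4): n-1 merges for n observations
print(row_clusters[0])      # first merge joins a closest pair at distance 0.1
```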
Applying agglomerative clustering via scikit-learn
from sklearn.cluster import AgglomerativeClustering ac = AgglomerativeClustering(n_clusters=3, affinity='euclidean', linkage='complete') labels = ac.fit_predict(X) print('Cluster labels: %s' % labels)
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
from sklearn.model_selection import train_test_split X = df[features] y = df['Target-Lenses'] X_train, X_test, y_train, y_test = train_test_split(X.values, y.values, test_size=0.25, random_state=42) from sklearn import cluster clf = cluster.KMeans(init='k-means++', n_clusters=3, random_state=5) clf.fit(X_train) print...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Affinity Propagation
# Affinity propagation aff = cluster.AffinityPropagation() aff.fit(X_train) print(aff.cluster_centers_indices_.shape) y_pred = aff.predict(X_test) from sklearn import metrics print("Adjusted rand score:{:.2}".format(metrics.adjusted_rand_score(y_test, y_pred))) print("Homogeneity score:{:.2} ".format(metrics.homogenei...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Mixture of Gaussian Models
from sklearn import mixture # Define a heldout dataset to estimate covariance type X_train_heldout, X_test_heldout, y_train_heldout, y_test_heldout = train_test_split( X_train, y_train, test_size=0.25, random_state=42) for covariance_type in ['spherical','tied','diag','full']: gm = mixture.GaussianMixture(n_components=3,...
Miscellaneous/Lenses Data Classification.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Matplotlib Introduction Matplotlib is a library for producing publication-quality figures. mpl (for short) was designed from the beginning to serve two purposes. First, allow for interactive, cross-platform control of figures and plots, and second, to make it very easy to produce static raster or vector graphics files ...
import matplotlib print(matplotlib.__version__) print(matplotlib.get_backend())
resources/matplotlib/AnatomyOfMatPlotLib/AnatomyOfMatplotlib-Part1-Figures_Subplots_and_layouts.ipynb
BrainIntensive/OnlineBrainIntensive
mit
Normally we wouldn't need to think about this too much, but IPython/Jupyter notebooks behave a touch differently than "normal" python. Inside of IPython, it's often easiest to use the Jupyter nbagg or notebook backend. This allows plots to be displayed and interacted with in the browser in a Jupyter notebook. Otherwi...
matplotlib.use('nbagg')
resources/matplotlib/AnatomyOfMatPlotLib/AnatomyOfMatplotlib-Part1-Figures_Subplots_and_layouts.ipynb
BrainIntensive/OnlineBrainIntensive
mit
On with the show! Matplotlib is a large project and can seem daunting at first. However, by learning the components, it should begin to feel much smaller and more approachable. Anatomy of a "Plot" People use "plot" to mean many different things. Here, we'll be using a consistent terminology (mirrored by the names of t...
import numpy as np
import matplotlib.pyplot as plt
resources/matplotlib/AnatomyOfMatPlotLib/AnatomyOfMatplotlib-Part1-Figures_Subplots_and_layouts.ipynb
BrainIntensive/OnlineBrainIntensive
mit
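A minimal sketch of that terminology: a Figure is the top-level container and each Axes is one "plot" inside it. The data and labels here are made up, and the Agg backend is used only so the example runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this sketch runs without a display
import matplotlib.pyplot as plt

# One Figure (the whole canvas) containing one Axes (a single "plot").
fig, ax = plt.subplots(figsize=(4, 3))
ax.plot([0, 1, 2], [0, 1, 4])  # an Artist (a Line2D) lives on the Axes
ax.set_xlabel("x")
ax.set_title("anatomy demo")
```

Almost everything you customize (labels, ticks, lines) hangs off the Axes object, which is why `fig, ax = plt.subplots()` is the usual starting point.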
Overview Time-series forecasting problems are ubiquitous throughout the business world and can be posed as a supervised machine learning problem. A common approach to creating features and labels is to use a sliding window where the features are historical entries and the label(s) represent entries in the future. As a...
!pip3 install pandas-gbq

%%bash
git clone https://github.com/GoogleCloudPlatform/training-data-analyst.git \
    --depth 1
cd training-data-analyst/blogs/gcp_forecasting
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
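The sliding-window idea described above can be sketched as follows; this is an illustrative stand-in, not the repo's `time_series.create_rolling_features_label`:

```python
import numpy as np

def rolling_features_label(series, window_size, horizon):
    """Slide a window over one series to build (features, label) rows.

    Each row holds `window_size` consecutive past values; its label is the
    value `horizon` steps after the window ends.
    """
    X, y = [], []
    for start in range(len(series) - window_size - horizon + 1):
        end = start + window_size
        X.append(series[start:end])
        y.append(series[end + horizon - 1])
    return np.array(X), np.array(y)

X, y = rolling_features_label(np.arange(10.0), window_size=3, horizon=2)
# First row: features [0., 1., 2.] with label 4.0
```

This turns a single long time series into a standard supervised-learning table: each window is one example.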
After cloning the above repo, we can import pandas and our custom module time_series.py.
%matplotlib inline
import pandas as pd
import pandas_gbq as gbq
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
import time_series

# Allow you to easily have Python variables in SQL query.
from IPython.core.magic import register_cell_magic
from IPy...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
For this demo we will be using New York City real estate data obtained from nyc.gov. This public dataset starts in 2003. The data can be loaded into BigQuery with the following code:
dfr = pd.read_csv('https://storage.googleapis.com/asl-testing/data/nyc_open_data_real_estate.csv')

# Upload to BigQuery.
PROJECT = "[your-project-id]"
DATASET = 'nyc_real_estate'
TABLE = 'residential_sales'
BUCKET = "[your-bucket]"  # Used later.
gbq.to_gbq(dfr, '{}.{}'.format(DATASET, TABLE), PROJECT, if_exists='rep...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Since we are doing local modeling, let's use a subsample of the data. Later we will train on all of the data in the cloud.
SOURCE_TABLE = TABLE
FILTER = '''residential_units = 1
AND sale_price > 10000
AND sale_date > TIMESTAMP('2010-12-31 00:00:00')'''

%%with_globals
%%bigquery --project {PROJECT} df
SELECT
  borough, neighborhood, building_class_category, tax_class_at_present,
  block, lot, ease_ment, building_class_at_pres...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Most sales come from the Upper West Side, Midtown West, and the Upper East Side.
ax = df.set_index('neighborhood').cnt\
       .tail(10)\
       .plot(kind='barh')
ax.set_xlabel('total sales');
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
SOHO and Civic Center are the most expensive neighborhoods.
%%with_globals
%%bigquery --project {PROJECT} df
SELECT
  neighborhood,
  APPROX_QUANTILES(sale_price, 100)[OFFSET(50)] AS median_price
FROM {SOURCE_TABLE}
WHERE {FILTER}
GROUP BY neighborhood
ORDER BY median_price

ax = df.set_index('neighborhood').median_price\
       .tail(10)\
       .plot(kind='barh')
ax.se...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Build features Let's create features for building a machine learning model: Aggregate median sales for each week. Prices are noisy and by grouping by week, we will smooth out irregularities. Create a rolling window to split the single long time series into smaller windows. One feature vector will contain a single wind...
%%with_globals
%%bigquery --project {PROJECT} df
SELECT
  sale_week,
  APPROX_QUANTILES(sale_price, 100)[OFFSET(50)] AS median_price
FROM (
  SELECT
    TIMESTAMP_TRUNC(sale_date, week) AS sale_week,
    sale_price
  FROM {SOURCE_TABLE}
  WHERE {FILTER})
GROUP BY sale_week
ORDER BY sale_week

s...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Sliding window Let's create our features and labels using the create_rolling_features_label function, which automatically builds the features/label setup.
WINDOW_SIZE = 52 * 1
HORIZON = 4 * 6
MONTHS = 0
WEEKS = 1
LABELS_SIZE = 1

df = time_series.create_rolling_features_label(sales, window_size=WINDOW_SIZE, pred_offset=HORIZON)
df = time_series.add_date_features(df, df.index, months=MONTHS, weeks=WEEKS)
df.head()
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's train our model using all weekly median prices from 2003 -- 2015. Then we will test the model's performance on prices from 2016 -- 2018.
# Features, label.
X = df.drop('label', axis=1)
y = df['label']

# Train/test split. Splitting on time.
train_ix = time_series.is_between_dates(y.index, end='2015-12-30')
test_ix = time_series.is_between_dates(y.index,
                                      start='2015-12-30',
                                      ...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
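The date-based split can be sketched with toy data; this is illustrative only, since the notebook uses the helper `time_series.is_between_dates`:

```python
import pandas as pd

# Ten weekly observations (toy data).
idx = pd.date_range("2015-11-01", periods=10, freq="W")
s = pd.Series(range(10), index=idx)

# Split on time, not at random: train strictly up to the cutoff, test after.
cutoff = pd.Timestamp("2015-12-30")
train = s[s.index <= cutoff]
test = s[s.index > cutoff]
```

Splitting on time rather than shuffling matters for forecasting: a random split would let the model peek at the future it is supposed to predict.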
Z-score normalize the features, using statistics computed on the training set.
mean = X_train.mean()
std = X_train.std()

def zscore(X):
    return (X - mean) / std

X_train = zscore(X_train)
X_test = zscore(X_test)
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
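Why reuse the training mean and std on the test set? Scaling the test split with its own statistics would leak test information into preprocessing. A toy sketch with made-up numbers:

```python
import numpy as np

# Toy train/test features (made up).
train = np.array([1.0, 2.0, 3.0, 4.0])
test = np.array([5.0, 6.0])

# Compute statistics on the training set ONLY, then apply to both splits;
# using test-set statistics would leak test information into preprocessing.
mean, std = train.mean(), train.std()
train_z = (train - mean) / std
test_z = (test - mean) / std
```

Note that the scaled test values can legitimately fall outside the roughly unit-variance range of the scaled training values.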
Initial model Baseline Build a naive model that just predicts the mean of the training set.
df_baseline = y_test.to_frame(name='label')
df_baseline['pred'] = y_train.mean()  # Join mean predictions with test labels.
baseline_global_metrics = time_series.Metrics(df_baseline.pred, df_baseline.label)
baseline_global_metrics.report("Global Baseline Model")

# Train mo...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The regression model performs 35% better than the baseline model. Observations: * Linear regression does okay for this dataset (regularization helps the model generalize). * Random Forest doesn't require a lot of tuning and performs a bit better than regression. * Gradient Boosting does better than regre...
# Data frame to query for plotting
df_res = pd.DataFrame({'pred': pred,
                       'baseline': df_baseline.pred,
                       'y_test': y_test})
metrics = time_series.Metrics(df_res.y_test, df_res.pred)

ax = df_res.iloc[:].plot(y=['pred', 'y_test'],
                         style=['b-', 'k-'],
                         figsize=(10, 5))
ax.set_title('rmse: {:2....
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
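The "35% better" comparison is a relative RMSE improvement over the mean baseline; a sketch with made-up numbers, not the notebook's actual results:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Made-up predictions to illustrate relative improvement over a mean baseline.
y_true = np.array([100.0, 120.0, 80.0, 110.0])
baseline = np.full_like(y_true, y_true.mean())  # always predict the mean
model = np.array([105.0, 115.0, 85.0, 108.0])

# Fraction by which the model's error beats the baseline's error.
improvement = 1 - rmse(y_true, model) / rmse(y_true, baseline)
```

An improvement of 0 means no better than always predicting the mean; 1 would mean perfect predictions.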
BigQuery modeling We have observed that there is signal in our data and that our small local model works well. Let's scale this model out to the cloud by training a BigQuery Machine Learning (BQML) model on the full dataset. Set up your GCP project The following steps are required, regardless of your notebook environment. ...
# Import BigQuery module
from google.cloud import bigquery

# Import external custom module containing SQL queries
import scalable_time_series

# Define hyperparameters
value_name = "med_sales_price"
downsample_size = 7  # 7 days into 1 week
window_size = 52
labels_size = 1
horizon = 1

# Construct a BigQuery client obj...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We need to create a date range table in BigQuery so that we can join our data to it and get the correct sequences.
# Call BigQuery and examine in dataframe
source_dataset = "nyc_real_estate"
source_table_name = "all_sales"
query_create_date_range = scalable_time_series.create_date_range(
    client.project, source_dataset, source_table_name)
df = client.query(query_create_date_range + "LIMIT 100").to_dataframe()
df.head(5)
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
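A pandas sketch of why a date "spine" matters: left-joining sparse sales onto a complete range of dates makes missing weeks explicit instead of silently absent. The dates and prices here are assumed for illustration:

```python
import pandas as pd

# Toy weekly spine: a complete range of week-start dates.
spine = pd.DataFrame({"sale_week": pd.date_range("2016-01-03", periods=4, freq="7D")})

# Sparse sales: only two of the four weeks have a sale (made-up prices).
sales = pd.DataFrame({
    "sale_week": pd.to_datetime(["2016-01-03", "2016-01-17"]),
    "median_price": [500000.0, 520000.0],
})

# Left-joining onto the spine keeps every week; missing weeks become NaN.
full = spine.merge(sales, on="sale_week", how="left")
```

Without the spine, the sliding window would treat consecutive sales as consecutive weeks and the sequences would be misaligned.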
Execute query and write to BigQuery table.
job_config = bigquery.QueryJobConfig()

# Set the destination table
table_name = "start_end_timescale_date_range"
table_ref = client.dataset(sink_dataset_name).table(table_name)
job_config.destination = table_ref
job_config.write_disposition = "WRITE_TRUNCATE"

# Start the query, passing in the extra configuration.
quer...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Now that we have the date range table created, we can create our training dataset for BQML.
# Call BigQuery and examine in dataframe
sales_dataset_table = source_dataset + "." + source_table_name
query_bq_sub_sequences = scalable_time_series.bq_create_rolling_features_label(
    client.project, sink_dataset_name, table_name, sales_dataset_table,
    value_name, downsample_size, window_size, horizon, labels_si...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Create BigQuery dataset Until now we've just been reading an existing BigQuery table; now we're going to create our own, so we need some place to put it. In BigQuery parlance, a Dataset is a folder for tables. We will take advantage of BigQuery's Python Client to create the dataset.
bq = bigquery.Client(project=PROJECT)
dataset = bigquery.Dataset(bq.dataset("bqml_forecasting"))
try:
    bq.create_dataset(dataset)  # will fail if dataset already exists
    print("Dataset created")
except Exception:
    print("Dataset already exists")
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Split the dataset into a train and an eval set.
feature_list = ["price_ago_{time}".format(time=time)
                for time in range(window_size, 0, -1)]
label_list = ["price_ahead_{time}".format(time=time)
              for time in range(1, labels_size + 1)]
select_list = ",".join(feature_list + label_list)
select_string = "SELECT {select_list} FROM ({query})".fo...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Create model To create a model: 1. Use CREATE MODEL and provide a destination table for the resulting model. Alternatively, CREATE OR REPLACE MODEL allows overwriting an existing model. 2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such a...
%%with_globals
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL bqml_forecasting.nyc_real_estate
OPTIONS(model_type = "linear_reg",
        input_label_cols = ["price_ahead_1"]) AS
{bqml_train_query}
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Get training statistics Because the query uses a CREATE MODEL statement to create a table, you do not see query results; the output is an empty string. To get the training results, we use the ML.TRAINING_INFO function. Have a look at Steps Three and Four of this tutorial to see a similar example.
%%with_globals
%%bigquery --project $PROJECT
SELECT
  {select_list}
FROM
  ML.TRAINING_INFO(MODEL `bqml_forecasting.nyc_real_estate`)
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
'eval_loss' is reported as mean squared error, so our RMSE is 291178. Your results may vary.
%%with_globals
%%bigquery --project $PROJECT
#standardSQL
SELECT
  {select_list}
FROM
  ML.EVALUATE(MODEL `bqml_forecasting.nyc_real_estate`, ({bqml_eval_query}))
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Predict To make predictions with our model, we use ML.PREDICT. Let's use the nyc_real_estate model trained above to infer the median sales price on all of our data. Have a look at Step Five of this tutorial to see another example.
%%with_globals
%%bigquery --project $PROJECT df
#standardSQL
SELECT
  predicted_price_ahead_1
FROM
  ML.PREDICT(MODEL `bqml_forecasting.nyc_real_estate`, ({bqml_eval_query}))
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
TensorFlow Sequence Model If you want a more customized model, Keras or TensorFlow may be helpful. Below we are going to create a custom LSTM sequence-to-one model that reads its input data from CSV files and then trains and evaluates. Create temporary BigQuery dataset
# Construct a BigQuery client object.
client = bigquery.Client()

# Set dataset_id to the ID of the dataset to create.
sink_dataset_name = "temp_forecasting_dataset"
dataset_id = "{}.{}".format(client.project, sink_dataset_name)

# Construct a full Dataset object to send to the API.
dataset = bigquery.Dataset.from_stri...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
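A minimal Keras sketch of a sequence-to-one LSTM; the layer sizes and in-memory random arrays are assumptions for illustration, while the notebook's actual model reads its inputs from CSV files:

```python
import numpy as np
import tensorflow as tf

# Sequence-to-one: read `window_size` past prices, emit one prediction.
window_size = 8  # kept tiny so the sketch trains in seconds (assumption)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(window_size, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),  # one predicted value per input window
])
model.compile(optimizer="adam", loss="mse")

# Random in-memory stand-in for the CSV input pipeline.
X = np.random.rand(4, window_size, 1).astype("float32")
y = np.random.rand(4, 1).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
pred = model.predict(X, verbose=0)
```

The LSTM consumes the whole window and emits one hidden state, which the Dense layer maps to a single forecast per example.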
Now that we have the date range table created, we can create our training dataset.
# Call BigQuery and examine in dataframe
sales_dataset_table = source_dataset + "." + source_table_name
downsample_size = 7
query_csv_sub_seqs = scalable_time_series.csv_create_rolling_features_label(
    client.project, sink_dataset_name, table_name, sales_dataset_table,
    value_name, downsample_size, window_size, h...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0