Explore the Data
Play around with view_sentence_range to view different parts of the data.
|
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
# source_text.split() with empty string to get all words without the '\n'
# source_text.split('\n') to get all sentences
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
|
project4/files/dlnd_language_translation.ipynb
|
myfunprograms/deep_learning
|
gpl-3.0
|
Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id to the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
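A tiny illustration of the expected mapping (the vocabularies and ids below are made up for the example; real ones come from the preprocessed dataset):

```python
# Hypothetical vocabularies -- real ones come from the preprocessed dataset.
source_vocab_to_int = {"new": 4, "jersey": 5}
target_vocab_to_int = {"<EOS>": 1, "new": 8, "jersey": 9}

source_ids = [source_vocab_to_int[w] for w in "new jersey".split()]
# Target sentences get the <EOS> id appended at the end.
target_ids = [target_vocab_to_int[w] for w in "new jersey".split()] + [target_vocab_to_int["<EOS>"]]
print(source_ids, target_ids)  # [4, 5] [8, 9, 1]
```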
|
def text_to_ids_helper(text, vocab_to_int, appendEOS=False):
sentences = [sentence if not appendEOS else sentence + ' <EOS>' for sentence in text.split('\n')]
ids = [[vocab_to_int[word] for word in sentence.split()] for sentence in sentences]
return ids
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
source_id_text = text_to_ids_helper(source_text, source_vocab_to_int)
target_id_text = text_to_ids_helper(target_text, target_vocab_to_int, True)
return (source_id_text, target_id_text)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
|
project4/files/dlnd_language_translation.ipynb
|
myfunprograms/deep_learning
|
gpl-3.0
|
Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concatenate the GO ID to the beginning of each batch.
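The slice-and-concat step is easier to see in plain NumPy before writing it in TensorFlow (the ids here are made up, with <GO> = 1):

```python
import numpy as np

go_id = 1  # stand-in for target_vocab_to_int['<GO>']
target_data = np.array([[11, 12, 13],
                        [21, 22, 23]])  # one row per batch element

ending = target_data[:, :-1]  # drop the last word id of each row
decoder_input = np.concatenate([np.full((2, 1), go_id), ending], axis=1)
print(decoder_input.tolist())  # [[1, 11, 12], [1, 21, 22]]
```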
|
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
id_GO = target_vocab_to_int['<GO>']
decoder_input = tf.concat([tf.fill([batch_size, 1], id_GO), ending], 1)
return decoder_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
|
project4/files/dlnd_language_translation.ipynb
|
myfunprograms/deep_learning
|
gpl-3.0
|
Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
|
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
"""
# TODO: Implement Function
# Build a separate cell per layer so layers don't share weights
cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(rnn_size), output_keep_prob=keep_prob)
         for _ in range(num_layers)]
cell = tf.contrib.rnn.MultiRNNCell(cells)
_, encoder_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return encoder_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
|
project4/files/dlnd_language_translation.ipynb
|
myfunprograms/deep_learning
|
gpl-3.0
|
Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
|
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
"""
# TODO: Implement Function
# Per discussion on Slack, dropout could be applied to encoder and train decoder.
# https://nd101.slack.com/archives/C3SEUBC5C/p1492633046150410
drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(cell=drop,
decoder_fn=train_decoder_fn,
inputs=dec_embed_input,
sequence_length=sequence_length,
scope=decoding_scope)
train_logits = output_fn(train_pred)
return train_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
|
project4/files/dlnd_language_translation.ipynb
|
myfunprograms/deep_learning
|
gpl-3.0
|
Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
|
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
"""
# TODO: Implement Function
infer_decoder_fn = \
tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn=output_fn,
encoder_state=encoder_state,
embeddings=dec_embeddings,
start_of_sequence_id=start_of_sequence_id,
end_of_sequence_id=end_of_sequence_id,
maximum_length=maximum_length-1,
num_decoder_symbols=vocab_size)
infer_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(cell=dec_cell,
decoder_fn=infer_decoder_fn,
scope=decoding_scope)
return infer_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
|
project4/files/dlnd_language_translation.ipynb
|
myfunprograms/deep_learning
|
gpl-3.0
|
Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
|
tf.reset_default_graph()
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
"""
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
with tf.variable_scope('decoding') as decoding_scope:
# Build a separate cell per layer so layers don't share weights
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
output_fn = lambda x: tf.contrib.layers.fully_connected(inputs=x,
num_outputs=vocab_size,
activation_fn=None,
scope=decoding_scope)
train_logits = \
decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
with tf.variable_scope('decoding', reuse=True) as decoding_scope:
infer_logits = \
decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
|
project4/files/dlnd_language_translation.ipynb
|
myfunprograms/deep_learning
|
gpl-3.0
|
Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
|
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
decoder_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, decoder_input)
train_logits, infer_logits = \
decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length,
rnn_size, num_layers, target_vocab_to_int, keep_prob)
return train_logits, infer_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
|
project4/files/dlnd_language_translation.ipynb
|
myfunprograms/deep_learning
|
gpl-3.0
|
Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability.
|
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 1
# Embedding Size: chosen relative to the number of unique words
encoding_embedding_size = 300
decoding_embedding_size = 300
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.70
|
project4/files/dlnd_language_translation.ipynb
|
myfunprograms/deep_learning
|
gpl-3.0
|
Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
|
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
id_UNK = vocab_to_int['<UNK>']
ids = [vocab_to_int.get(word, id_UNK) for word in sentence.lower().split()]
return ids
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
|
project4/files/dlnd_language_translation.ipynb
|
myfunprograms/deep_learning
|
gpl-3.0
|
Step 2: Calculate the regression coefficients
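The normal-equations solve used in this step (inv(P'P) P' g) can also be done with np.linalg.lstsq, which avoids forming an explicit inverse and is numerically more stable. A minimal sketch on synthetic data (the names here are illustrative, not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
P = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])  # design matrix with intercept
true_coef = np.array([1.0, 2.0, -0.5])
g = P @ true_coef  # noiseless response, so both solvers recover true_coef

rc_normal = np.linalg.inv(P.T @ P) @ P.T @ g      # normal equations, as in the cell below
rc_lstsq, *_ = np.linalg.lstsq(P, g, rcond=None)  # least-squares solver
print(np.allclose(rc_normal, rc_lstsq))  # True
```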
|
#Reshape the data
g_list = np.reshape([g[0,0] for g in gs], (N,1))
As_list = np.reshape(As, (N,4))
Bs_list = np.reshape(Bs, (N,2))
Ps_list = np.concatenate((np.ones((N,1)), As_list[:,:1], As_list[:,2:], Bs_list), axis=1)
from numpy.linalg import inv
RC = np.dot( np.dot( inv( (np.dot(Ps_list.T, Ps_list)) ), Ps_list.T), g_list)
print("Regression coefficients:", RC)
|
Code/SSRC_LCA_evelynegroen.ipynb
|
evelynegroen/evelynegroen.github.io
|
mit
|
Step 3: Calculate the squared standardized regression coefficients
|
import statistics as stats
var_g = stats.variance(g_list[:,0])
var_x = [stats.variance(Ps_list[:,k]) for k in range(1,6)]
SSRC = (np.array(var_x) / var_g) * (RC[1:6,0]**2)  # var_x is a Python list, so convert before dividing
print("squared standardized regression coefficients:", SSRC)
|
Code/SSRC_LCA_evelynegroen.ipynb
|
evelynegroen/evelynegroen.github.io
|
mit
|
Visualize
|
import matplotlib.pyplot as plt
SSRC_procent = SSRC * 100
x_label=[ 'A(1,1)', 'A(2,1)', 'A(2,2)', 'B(1,1)', 'B(1,2)']
x_pos = range(5)
plt.bar(x_pos, SSRC_procent, align='center')
plt.xticks(x_pos, x_label)
plt.title('Global sensitivity analysis: squared standardized regression coefficients')
plt.ylabel('SSRC (%)')
plt.xlabel('Parameter')
plt.show()
|
Code/SSRC_LCA_evelynegroen.ipynb
|
evelynegroen/evelynegroen.github.io
|
mit
|
B-DNA Double-Stranded Parameters
|
r = 1 # (nm) dsDNA radius
δ = 0.34 # (nm) dsDNA rise per base pair
n = 10.5 # number of base pairs per turn
Δφ = 2.31 # (radians) minor-groove angle between the two strands' backbones
φ = 2*pi/n # (radians) rotation per base pair
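As a quick sanity check on these values (a small illustrative computation, not part of the original notebook), the helical pitch per full turn follows directly from the rise per base pair and the number of base pairs per turn:

```python
from math import pi

delta = 0.34  # (nm) rise per base pair
n = 10.5      # base pairs per helical turn

pitch = n * delta  # helix rise per full turn, ~3.57 nm for B-DNA
phi = 2 * pi / n   # rotation per base pair, in radians
print(round(pitch, 2), round(phi, 3))  # 3.57 0.598
```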
|
DNA model.ipynb
|
tritemio/multispot_paper
|
mit
|
Dye and Linker Geometry
<img src="figures/DNA1.png" style="width:300px;float:left;">
<img src="figures/DNA2.png" style="width:300px;float:left;">
<img src="figures/DNA3.png" style="width:300px;float:left;">
Fraction for the segment $SH_1$ over $H_1H_2$:
|
def dye_position(i, l=1.6, λ=0.5, ψ=0):
# global structural params: r, δ, n, Δφ
φ = 2*pi/n # (radians) rotation per base pair
Dx = r * cos(φ*i) + λ*( r*cos(φ*i + Δφ) - r*cos(φ*i) ) + l*cos(ψ)*cos(φ*i + 0.5*Δφ)
Dy = r * sin(φ*i) + λ*( r*sin(φ*i + Δφ) - r*sin(φ*i) ) + l*cos(ψ)*sin(φ*i + 0.5*Δφ)
Dz = i*δ + l*sin(ψ)
return np.array([Dx, Dy, Dz])
def plot_dye(P, axes=None, **kws):
kws_ = dict(marker='o', ls='-')
kws_.update(kws)
if axes is None:
fig = plt.figure(figsize=(9, 9))
ax_xy = plt.subplot2grid((2,2), (0,0))
ax_xz = plt.subplot2grid((2,2), (1,0))
ax_yz = plt.subplot2grid((2,2), (0,1))
ax_3d = fig.add_subplot(224, projection='3d')
else:
ax_xy, ax_xz, ax_yz, ax_3d = axes
ax_xy.plot(P[0], P[1], **kws_)
ax_xz.plot(P[0], P[2], **kws_)
ax_yz.plot(P[1], P[2], **kws_)
for ax in (ax_xy, ax_xz):
ax.set_xlabel('x (nm)')
ax_xy.set_ylabel('y (nm)')
ax_xz.set_xlabel('x (nm)')
ax_xz.set_ylabel('z (nm)')
ax_yz.set_xlabel('y (nm)')
ax_yz.set_ylabel('z (nm)')
ax_3d.plot(P[0], P[1], P[2], **kws_)
return (ax_xy, ax_xz, ax_yz, ax_3d)
|
DNA model.ipynb
|
tritemio/multispot_paper
|
mit
|
Donor
|
λ = 0.5
ψ = 0
i = 7 # number of bases from reference "base 0"
l = 1.6 # (nm) distance between S and dye position D
dye_position(7)
bp = np.arange(0, 40)
PD = dye_position(bp)
PA = dye_position(bp, l=1.6, λ=0.5, ψ=pi)
axes = plot_dye(PD)
plot_dye(PA, axes, color='r');
bp = np.arange(0, 40, 0.1)
PD = dye_position(bp, l=1.6, λ=0.2, ψ=0)
PA = dye_position(0, l=1.6, λ=0.8, ψ=-pi/2)
R = np.array([np.linalg.norm(px - PA) for px in PD.T])
#R
plt.plot(bp, R);
plt.xlabel('Base-pair')
plt.ylabel('Distance (nm)')
plt.ylim(0);
R0 = 6.7 # nm
def fret(R, R0):
return 1 / (1 + (R/R0)**6)
plt.plot(bp, fret(R, R0));
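As a sanity check on the FRET relation used above (illustrative only): at R = R0 the transfer efficiency is exactly one half, and it falls off steeply with the sixth power of distance.

```python
def fret(R, R0):
    # Foerster transfer efficiency for donor-acceptor distance R and radius R0
    return 1 / (1 + (R / R0) ** 6)

R0 = 6.7  # nm, as in the notebook
print(fret(R0, R0))      # 0.5 exactly at the Foerster radius
print(fret(2 * R0, R0))  # 1/65, efficiency collapses at twice the radius
```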
|
DNA model.ipynb
|
tritemio/multispot_paper
|
mit
|
Alright, in this section we're going to continue with the running data set, but we're going to dive a bit deeper into ways of analyzing the data, including filtering, dropping rows, doing some groupings, and that sort of thing.
So what we'll do is read in our csv file.
|
pd.read_csv?
list(range(1,7))
df = pd.read_csv('../data/date_fixed_running_data_with_time.csv', parse_dates=['Date'], usecols=list(range(0,6)))
df.dtypes
df.sort_index(inplace=True)  # df.sort() was removed in later pandas versions
df.head()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
Now let's talk about getting some summary statistics. I would encourage you to try these on your own. We've learned pretty much everything we need in order to do these without guidance, and as always, if you need clarification just ask on the side.
What was the longest run in miles and minutes that I ran?
|
df.Minutes.max()
df.Miles.max()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
What about the shortest in miles and minutes that I ran?
|
df.Minutes.min()
df.Miles.min()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
We forgot to ignore our null values, so how would we do this while ignoring those?
|
df.Miles[df.Miles > 0].min()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
What was the most common running distance I did, excluding times when I didn't run at all?
|
df.Miles[df.Miles > 0].value_counts().index[0]
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
Plot a graph of the cumulative running distance in this dataset.
|
df.Miles.cumsum().plot()
plt.xlabel("Day Number")
plt.ylabel("Distance")
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
Plot a graph of the cumulative running hours in this data set.
|
(df.Minutes.fillna(0).cumsum() / 60).plot()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
Another interesting question we could ask is: what days of the week do I commonly go for runs? Am I faster on certain days, or does my speed improve over time relative to the distance that I'm running?
So let's get our days of the week.
|
df.Date[0].strftime("%A")
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
We will do that by mapping our date column to the time format we need.
|
df.Date.map(lambda x: x.strftime("%A")).head()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
Then we just set that to a new column.
|
df['Day_of_week'] = df.Date.map(lambda x: x.strftime("%A"))
df.head(10)
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
And we can make a bar plot of it; let's see if we can distinguish anything unique about certain days of the week.
|
df[df.Miles > 0].Day_of_week.value_counts().plot(kind='bar')
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
We can see that in this sample I run a lot more on Friday, Saturday, and Monday. Some interesting patterns. Why don't we try looking at the means and that sort of thing? We'll do that shortly by creating groups of data frames.
But before we get there, at this point our data frame is getting pretty messy, and I think it's worth explaining how to add and remove rows and columns.
First let's remove the Time column, seeing as we already have minutes and seconds.
|
del(df['Time'])
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
del will delete it in place
|
df.head()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
Finally we can use drop to drop a column. Note that we have to specify the axis (we can also use this to drop rows), and that this does not happen in place.
|
df.drop('Seconds',axis=1)
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
We can also use drop to drop a specific row by specifying axis 0.
|
tempdf = pd.DataFrame(np.arange(4).reshape(2,2))
tempdf
tempdf.drop(1,axis=0)
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
We already saw how to create a new column; we can also create a new row using the append method. This takes in a data frame or Series and appends it to the end of the data frame.
|
tempdf.append(pd.Series([4,5]), ignore_index=True)
df.head()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
We can also pop out a column, which will remove it from the data frame and return the Series. You'll see that it happens in place.
|
df.pop('Seconds')
df.head()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
Now we've made our dataset a bit more manageable. We've kind of just got the basics of what we need to perform some groupwise analysis.
Now at this point we're going to do some groupings. This is an extremely powerful part of pandas and one that you'll use all the time.
pandas follows the Split-Apply-Combine style of data analysis.
Many data analysis problems involve the application of a split-apply-combine strategy, where you break up a big problem into manageable pieces, operate on each piece independently and then put all the pieces back together.
Hadley Wickham from Rice University:
http://www.jstatsoft.org/v40/i01/paper
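The strategy can be sketched without pandas at all; this toy version (made-up data) splits a running log by weekday, applies a mean, and combines the results:

```python
from statistics import mean

# Toy running log standing in for the DataFrame rows: (day_of_week, miles).
runs = [("Mon", 3.0), ("Fri", 5.0), ("Mon", 4.0), ("Sat", 6.0), ("Fri", 3.0)]

# Split: group rows by day of week.
groups = {}
for day, miles in runs:
    groups.setdefault(day, []).append(miles)

# Apply a summary per group, then Combine the pieces into one result.
avg_miles = {day: mean(vals) for day, vals in groups.items()}
print(avg_miles)  # {'Mon': 3.5, 'Fri': 4.0, 'Sat': 6.0}
```

A pandas groupby does this split-apply-combine in one step, as we'll see next.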
Since we're going to want to check things in groups. What I'm going to do is try to analyze each day of the week to see if there are any differences in the types of running that I do on those days.
We'll start by grouping our data set on those weekdays, basically creating a dictionary of the data where the key is the weekday and the value is the dataframe of all those values.
First let's do this the hard way...
|
for dow in df.Day_of_week.unique():
print(dow)
print(df[df.Day_of_week == dow])
break
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
This is clearly an ugly way to do this, and pandas provides a much simpler way of approaching this problem: creating a groupby object.
But first I'm going to filter out our zero values because they'll throw off our analysis.
|
df['Miles'] = df.Miles[df.Miles > 0]
dows = df.groupby('Day_of_week')
print(dows)
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
We can get the size of each one by using the size method. This basically tells us how many items are in each category.
|
dows.size()
dows.count()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
Now we have our groups and we can start doing groupwise analysis. What does that mean?
It means we can start answering questions like: what is the average speed per weekday, or what is the total miles run per weekday?
|
dows.mean()
dows.sum()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
It might be interesting to see the total number of runs to try and spot any outliers, simply because Thursday, Friday, and Saturday are relatively close in distance, but not so much in speed.
We also get access to a lot of summary statistics from here that we can get from the groups.
|
dows.describe()
df.groupby('Day_of_week').mean()
df.groupby('Day_of_week').std()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
Iterating through the groups is also very straightforward.
|
for name, group in dows:
print(name)
print(group)
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
You can get specific groups by using the get_group method.
|
dows.get_group('Friday')
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
We can use the agg aggregation method to apply an operation that gets the counts for each group.
|
dows.agg(lambda x: len(x))['Miles']
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
Another way to do this would be to add a count column to our data frame, then sum up each column.
|
df['Count'] = 1
df.head(10)
df.groupby('Day_of_week').sum()
|
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
|
mitchshack/data_analysis_with_python_and_pandas
|
apache-2.0
|
For the purposes of this notebook we are going to use one of the predefined objective functions that come with GPyOpt. However, the key thing to realize is that the function could be anything (e.g., the results of a physical experiment). As long as users are able to externally evaluate the suggested points somehow and provide GPyOpt with the results, the library has no opinions about the objective function's origin.
|
func = GPyOpt.objective_examples.experiments1d.forrester()
|
manual/GPyOpt_external_objective_evaluation.ipynb
|
SheffieldML/GPyOpt
|
bsd-3-clause
|
Now we define the domain of the function to optimize as usual.
|
domain =[{'name': 'var1', 'type': 'continuous', 'domain': (0,1)}]
|
manual/GPyOpt_external_objective_evaluation.ipynb
|
SheffieldML/GPyOpt
|
bsd-3-clause
|
First we are going to run the optimization loop outside of GPyOpt, and only use GPyOpt to get the next point to evaluate our function.
There are two things to pay attention to when creating the main optimization object:
* Objective function f is explicitly set to None
* Since we recreate the object anew for each iteration, we need to pass data about all previous iterations to it
|
X_init = np.array([[0.0],[0.5],[1.0]])
Y_init = func.f(X_init)
iter_count = 10
current_iter = 0
X_step = X_init
Y_step = Y_init
while current_iter < iter_count:
bo_step = GPyOpt.methods.BayesianOptimization(f = None, domain = domain, X = X_step, Y = Y_step)
x_next = bo_step.suggest_next_locations()
y_next = func.f(x_next)
X_step = np.vstack((X_step, x_next))
Y_step = np.vstack((Y_step, y_next))
current_iter += 1
|
manual/GPyOpt_external_objective_evaluation.ipynb
|
SheffieldML/GPyOpt
|
bsd-3-clause
|
Let's visualize the results. The size of the marker denotes the order in which the point was evaluated: the bigger the marker, the later the evaluation.
|
x = np.arange(0.0, 1.0, 0.01)
y = func.f(x)
plt.figure()
plt.plot(x, y)
for i, (xs, ys) in enumerate(zip(X_step, Y_step)):
plt.plot(xs, ys, 'rD', markersize=10 + 20 * (i+1)/len(X_step))
|
manual/GPyOpt_external_objective_evaluation.ipynb
|
SheffieldML/GPyOpt
|
bsd-3-clause
|
To compare the results, let's now execute the whole loop with GPyOpt.
|
bo_loop = GPyOpt.methods.BayesianOptimization(f = func.f, domain = domain, X = X_init, Y = Y_init)
bo_loop.run_optimization(max_iter=iter_count)
X_loop = bo_loop.X
Y_loop = bo_loop.Y
|
manual/GPyOpt_external_objective_evaluation.ipynb
|
SheffieldML/GPyOpt
|
bsd-3-clause
|
Now let's plot the results of this optimization and compare to the previous external evaluation run. As before, the size of the marker corresponds to its evaluation order.
|
plt.figure()
plt.plot(x, y)
for i, (xl, yl) in enumerate(zip(X_loop, Y_loop)):
plt.plot(xl, yl, 'rD', markersize=10 + 20 * (i+1)/len(X_loop))
|
manual/GPyOpt_external_objective_evaluation.ipynb
|
SheffieldML/GPyOpt
|
bsd-3-clause
|
To allow even more control over the execution, this API lets you specify points that should be ignored (say, the objective is known to fail in certain locations), as well as points that are already pending evaluation (say, when the user is running several candidates in parallel). Here is how one can provide this information.
|
pending_X = np.array([[0.75]])
ignored_X = np.array([[0.15], [0.85]])
bo = GPyOpt.methods.BayesianOptimization(f = None, domain = domain, X = X_step, Y = Y_step, de_duplication = True)
bo.suggest_next_locations(pending_X = pending_X, ignored_X = ignored_X)
|
manual/GPyOpt_external_objective_evaluation.ipynb
|
SheffieldML/GPyOpt
|
bsd-3-clause
|
HTML
Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.
|
from IPython.display import display, HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
display(h)
|
days/day08/Display.ipynb
|
rvperry/phys202-2015-work
|
mit
|
Define some PV system parameters.
|
surface_tilt = 30
surface_azimuth = 180 # pvlib uses 0=North, 90=East, 180=South, 270=West convention
albedo = 0.2
start = datetime.now() # today's date
end = start + timedelta(days=7) # 7 days from today
timerange = pd.date_range(start, end, tz=tz)
# Define forecast model
fm = GFS()
#fm = NAM()
#fm = NDFD()
#fm = RAP()
#fm = HRRR()
# Retrieve data
forecast_data = fm.get_query_data(latitude, longitude, timerange)
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
This is a pandas DataFrame object. It has a lot of great properties that are beyond the scope of our tutorials.
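A couple of those DataFrame conveniences are worth knowing. Here is a sketch on a toy frame (the real `forecast_data` depends on a live forecast request, so a stand-in is used):

```python
import pandas as pd

# Toy stand-in for forecast_data: hourly temperatures indexed by timestamp.
toy = pd.DataFrame(
    {'temperature': [10.0, 12.5, 15.0, 13.0]},
    index=pd.date_range('2019-01-01', periods=4, freq='h'),
)

# Summary statistics come for free with any DataFrame.
print(toy['temperature'].describe())

# Time-based resampling works the same way on real forecast data.
print(toy['temperature'].resample('2h').mean())
```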
|
forecast_data['temperature'].plot()
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
Plot the GHI data
|
ghi = forecast_data['ghi']
ghi.plot()
plt.ylabel('Irradiance ($W m^{-2}$)')
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
Calculate modeling intermediates
Before we can calculate power for all the forecast times, we will need to calculate:
* solar position
* extra terrestrial radiation
* airmass
* angle of incidence
* POA sky and ground diffuse radiation
* cell and module temperatures
The approach here follows that of the pvlib tmy_to_power notebook. You will find more details regarding this approach and the values being calculated in that notebook.
Solar position
Calculate the solar position for all times in the forecast data.
The default solar position algorithm is based on Reda and Andreas (2004). Our implementation is pretty fast, but you can make it even faster if you install numba and add method='nrel_numba' to the function call below.
|
# retrieve time and location parameters
time = forecast_data.index
a_point = fm.location
solpos = solarposition.get_solarposition(time, a_point)
#solpos.plot()
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
The funny looking jump in the azimuth is just due to the coarse time sampling of the forecast data.
DNI ET
Calculate extra terrestrial radiation. This is needed for many plane of array diffuse irradiance models.
|
dni_extra = irradiance.extraradiation(fm.time)
dni_extra = pd.Series(dni_extra, index=fm.time)
#dni_extra.plot()
#plt.ylabel('Extra terrestrial radiation ($W m^{-2}$)')
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
Cell and module temperature
Calculate pv cell and module temperature
|
temperature = forecast_data['temperature']
wnd_spd = forecast_data['wind_speed']
pvtemps = pvsystem.sapm_celltemp(poa_irrad['poa_global'], wnd_spd, temperature)
pvtemps.plot()
plt.ylabel('Temperature (C)')
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
DC power using SAPM
Get module data from the web.
|
sandia_modules = pvsystem.retrieve_sam(name='SandiaMod')
# select one module for the SAPM run below (the module used in the pvlib tutorials)
sandia_module = sandia_modules.Canadian_Solar_CS5P_220M___2009_
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
Run the SAPM using the parameters we calculated above.
|
sapm_out = pvsystem.sapm(sandia_module, poa_irrad.poa_direct, poa_irrad.poa_diffuse,
pvtemps['temp_cell'], airmass, aoi)
#print(sapm_out.head())
sapm_out[['p_mp']].plot()
plt.ylabel('DC Power (W)')
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
DC power using single diode
|
cec_modules = pvsystem.retrieve_sam(name='CECMod')
cec_module = cec_modules.Canadian_Solar_CS5P_220M
photocurrent, saturation_current, resistance_series, resistance_shunt, nNsVth = (
pvsystem.calcparams_desoto(poa_irrad.poa_global,
temp_cell=pvtemps['temp_cell'],
alpha_isc=cec_module['alpha_sc'],
module_parameters=cec_module,
EgRef=1.121,
dEgdT=-0.0002677) )
single_diode_out = pvsystem.singlediode(cec_module, photocurrent, saturation_current,
resistance_series, resistance_shunt, nNsVth)
single_diode_out[['p_mp']].plot()
plt.ylabel('DC Power (W)')
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
Choose a particular inverter
|
# retrieve the inverter database before selecting an inverter
sapm_inverters = pvsystem.retrieve_sam('sandiainverter')
sapm_inverter = sapm_inverters['ABB__MICRO_0_25_I_OUTD_US_208_208V__CEC_2014_']
sapm_inverter
p_acs = pd.DataFrame()
p_acs['sapm'] = pvsystem.snlinverter(sapm_inverter, sapm_out.v_mp, sapm_out.p_mp)
p_acs['sd'] = pvsystem.snlinverter(sapm_inverter, single_diode_out.v_mp, single_diode_out.p_mp)
p_acs.plot()
plt.ylabel('AC Power (W)')
#diff = p_acs['sapm'] - p_acs['sd']
#diff.plot()
#plt.ylabel('SAPM - SD Power (W)')
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
Plot just a few days.
|
p_acs[start:start+timedelta(days=2)].plot()
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
Some statistics on the AC power
|
p_acs.describe()
p_acs.sum()
|
docs/tutorials/notebooks/forecast_to_power.ipynb
|
MoonRaker/pvlib-python
|
bsd-3-clause
|
Authenticate against the ADH API
ADH documentation
|
#!/usr/bin/python
#
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utilities used to step through OAuth 2.0 flow.
These are intended to be used for stepping through samples for the Full Circle
Query v2 API.
"""
_APPLICATION_NAME = 'ADH Campaign Overlap'
_CREDENTIALS_FILE = 'fcq-credentials.json' #just to rewrite a new credential file, not for users to provide
_SCOPES = 'https://www.googleapis.com/auth/adsdatahub'
_DISCOVERY_URL_TEMPLATE = 'https://%s/$discovery/rest?version=%s&key=%s'
_FCQ_DISCOVERY_FILE = 'fcq-discovery.json'
_FCQ_SERVICE = 'adsdatahub.googleapis.com'
_FCQ_VERSION = 'v1'
_REDIRECT_URI = 'urn:ietf:wg:oauth:2.0:oob'
_SCOPE = ['https://www.googleapis.com/auth/adsdatahub']
_TOKEN_URI = 'https://accounts.google.com/o/oauth2/token'
MAX_PAGE_SIZE = 50
def _GetCredentialsFromInstalledApplicationFlow():
"""Get new credentials using the installed application flow."""
flow = InstalledAppFlow.from_client_secrets_file(
CLIENT_SECRETS_FILE, scopes=_SCOPE)
flow.redirect_uri = _REDIRECT_URI # Set the redirect URI used for the flow.
auth_url, _ = flow.authorization_url(prompt='consent')
print ('Log into the Google Account you use to access the adsdatahub Query '
'v1 API and go to the following URL:\n%s\n' % auth_url)
print('After approving the token, enter the verification code (if specified).')
code = input('Code: ')
try:
flow.fetch_token(code=code)
except InvalidGrantError as ex:
print('Authentication has failed: %s' % ex)
sys.exit(1)
credentials = flow.credentials
_SaveCredentials(credentials)
return credentials
def _LoadCredentials():
"""Loads and instantiates Credentials from JSON credentials file."""
with open(_CREDENTIALS_FILE, 'rb') as handler:
stored_creds = json.loads(handler.read())
creds = Credentials(client_id=stored_creds['client_id'],
client_secret=stored_creds['client_secret'],
token=None,
refresh_token=stored_creds['refresh_token'],
token_uri=_TOKEN_URI)
return creds
def _SaveCredentials(creds):
"""Save credentials to JSON file."""
stored_creds = {
'client_id': getattr(creds, '_client_id'),
'client_secret': getattr(creds, '_client_secret'),
'refresh_token': getattr(creds, '_refresh_token')
}
with open(_CREDENTIALS_FILE, 'wb') as handler:
handler.write(json.dumps(stored_creds))
def GetCredentials():
"""Get stored credentials if they exist, otherwise return new credentials.
If no stored credentials are found, new credentials will be produced by
stepping through the Installed Application OAuth 2.0 flow with the specified
client secrets file. The credentials will then be saved for future use.
Returns:
A configured google.oauth2.credentials.Credentials instance.
"""
try:
creds = _LoadCredentials()
creds.refresh(Request())
except IOError:
creds = _GetCredentialsFromInstalledApplicationFlow()
return creds
def GetDiscoveryDocument():
"""Downloads the adsdatahub v1 discovery document.
Downloads the adsdatahub v1 discovery document to fcq-discovery.json
if it is accessible. If the file already exists, it will be overwritten.
Raises:
ValueError: raised if the discovery document is inaccessible for any reason.
"""
credentials = GetCredentials()
discovery_url = _DISCOVERY_URL_TEMPLATE % (
_FCQ_SERVICE, _FCQ_VERSION, DEVELOPER_KEY)
auth_session = AuthorizedSession(credentials)
discovery_response = auth_session.get(discovery_url)
if discovery_response.status_code == 200:
with open(_FCQ_DISCOVERY_FILE, 'wb') as handler:
handler.write(discovery_response.text)
else:
raise ValueError('Unable to retrieve discovery document for api name "%s" '
'and version "%s" via discovery URL: %s'
% (_FCQ_SERVICE, _FCQ_VERSION, discovery_url))
def GetService():
"""Builds a configured adsdatahub v1 API service.
Returns:
A googleapiclient.discovery.Resource instance configured for the adsdatahub v1 service.
"""
credentials = GetCredentials()
discovery_url = _DISCOVERY_URL_TEMPLATE % (
_FCQ_SERVICE, _FCQ_VERSION, DEVELOPER_KEY)
service = discovery.build(
'adsdatahub', _FCQ_VERSION, credentials=credentials,
discoveryServiceUrl=discovery_url)
return service
def GetServiceFromDiscoveryDocument():
"""Builds a configured Full Circle Query v2 API service via discovery file.
Returns:
A googleapiclient.discovery.Resource instance configured for the Full Circle
Query API v2 service.
"""
credentials = GetCredentials()
with open(_FCQ_DISCOVERY_FILE, 'rb') as handler:
discovery_doc = handler.read()
service = discovery.build_from_document(
service=discovery_doc, credentials=credentials)
return service
try:
full_circle_query = GetService()
except IOError as ex:
print ('Unable to create ads data hub service - %s' % ex)
print ('Did you specify the client secrets file in samples_util.py?')
sys.exit(1)
try:
# Execute the request.
response = full_circle_query.customers().list().execute()
except HttpError as e:
print (e)
sys.exit(1)
if 'customers' in response:
print ('ADH API Returned {} Ads Data Hub customers for the current user!'.format(len(response['customers'])))
for customer in response['customers']:
print(json.dumps(customer))
else:
print ('No customers found for current user.')
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Frequency Analysis
<b>Purpose:</b> This tool guides you in defining an optimal frequency cap based on the CTR curve; because of that, it is most useful in awareness use cases.
Key notes
For some campaigns the user ID will be <b>zeroed</b> (e.g. Google Data, ITP browsers and YouTube Data) and therefore <b>excluded</b> from the analysis. For more information click <a href="https://support.google.com/dcm/answer/9006418" > here</a>;
Only campaigns whose clicks and impressions were tracked will be included in the analysis.
Instructions
* First of all: <b>MAKE A COPY</b> =);
* Fulfill the query parameters in the Box 1;
* In the menu above, click Runtime > Run All;
* Authorize your credentials;
* Go to the end of the colab and your figures will be ready;
* After defining what should be the optimal frequency cap fill it in the Box 2 and press play.
Step 1 - Instructions - Defining parameters to find the optimal frequency
<b>max_freq:</b> The maximum frequency to plot (e.g. if you put 50, the analysis covers impressions shown up to 50 times per user);
<b>id_type:</b> How do you want to filter your data (if you don't want to filter leave it blank);
<b>IDs:</b> According to the id_type chosen above, fill in this field following this pattern: 'id-1111', 'id-2222', ...
|
#@title Define ADH configuration parameters
customer_id = 1 #@param
query_name = 'test1' #@param
big_query_project = 'adh-scratch' #@param Destination Project ID
big_query_dataset = 'test' #@param Destination Dataset
big_query_destination_table = 'freqanalysis_test' #@param Destination Table
big_query_destination_table_affinity = 'freqanalysis_test_affinity' #@param Affinity Destination Table
big_query_destination_table_inmarket = 'freqanalysis_test_inmarket' #@param In-market Destination Table
big_query_destination_table_age_gender = 'freqanalysis_test_age_gender' #@param Age/Gender Destination Table
start_date = '2019-12-01' #@param {type:"date", allow-input: true}
end_date = '2019-12-31' #@param {type:"date", allow-input: true}
max_freq = 100 #@param {type:"integer", allow-input: true}
cpm = 12#@param {type:"number", allow-input: true}
id_type = "campaign_id" #@param ["", "advertiser_id", "campaign_id", "placement_id", "ad_id"] {type: "string", allow-input: false}
IDs = "12345678" #@param {type: "string", allow-input: true}
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Step 2 - Create a function for the final calculations
Calculate metrics from the DT data using pandas.
Pass the pandas dataframe in when you call this function.
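As a quick sanity check of what this function produces, here is a run on hypothetical toy numbers (the function body is repeated so the sketch is self-contained):

```python
import pandas as pd

def df_calc_fields(df):
    # Same derived-metric logic as the notebook function.
    df['ctr'] = df.clicks / df.impressions
    df['cpc'] = df.cost / df.clicks
    df['cumulative_clicks'] = df.clicks.cumsum()
    df['cumulative_impressions'] = df.impressions.cumsum()
    df['cumulative_reach'] = df.reach.cumsum()
    df['cumulative_cost'] = df.cost.cumsum()
    df['coverage_clicks'] = df.cumulative_clicks / df.clicks.sum()
    df['coverage_impressions'] = df.cumulative_impressions / df.impressions.sum()
    df['coverage_reach'] = df.cumulative_reach / df.reach.sum()
    return df

# Toy per-frequency rows, not real campaign data.
toy = pd.DataFrame({
    'frequency': [1, 2, 3],
    'clicks': [10, 5, 1],
    'impressions': [1000, 400, 90],
    'reach': [1000, 200, 30],
    'cost': [5.0, 2.0, 0.5],
})
toy = df_calc_fields(toy)
print(toy[['frequency', 'ctr', 'coverage_reach']])
```

The `coverage_*` columns are cumulative shares, so the last row of each should equal 1.0.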
|
def df_calc_fields(df):
df['ctr'] = df.clicks / df.impressions
df['cpc'] = df.cost / df.clicks
df['cumulative_clicks'] = df.clicks.cumsum()
df['cumulative_impressions'] = df.impressions.cumsum()
df['cumulative_reach'] = df.reach.cumsum()
df['cumulative_cost'] = df.cost.cumsum()
df['coverage_clicks'] = df.cumulative_clicks / df.clicks.sum()
df['coverage_impressions'] = df.cumulative_impressions / df.impressions.sum()
df['coverage_reach'] = df.cumulative_reach / df.reach.sum()
return df
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Step 3 - Build the query
Step 3a - Query for reach & frequency
Set up the variables
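The `{ID_filters}` placeholder that appears throughout the query strings below is filled in with `str.format(**dc)`; a minimal sketch of that pattern, using hypothetical parameter values:

```python
# Hypothetical values standing in for the Box 1 inputs.
dc = {'id_type': 'campaign_id', 'IDs': "12345678"}
dc['ID_filters'] = 'AND {id_type} IN ({IDs})'.format(**dc)

template = "SELECT user_id FROM adh.google_ads_impressions WHERE user_id != '0' {ID_filters}"
print(template.format(**dc))
# -> ... WHERE user_id != '0' AND campaign_id IN (12345678)
```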
|
# Build the query
dc = {}
if (IDs == ""):
dc['ID_filters'] = ""
else:
dc['id_type'] = id_type
dc['IDs'] = IDs
dc['ID_filters'] = '''AND {id_type} IN ({IDs})'''.format(**dc)
#create global query list
global_query_name = []
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Part 1 - Find all impressions from the impression table:
* Select all user IDs from the impression table
* Select the event_time
* Mark the interaction type as 'imp' for all of these rows
* Filter for the dates set in Step 1 using the partition files to reduce BigQuery costs by only searching in files within a 2 day interval of the set date range
* Filter out any user IDs that are 0
* If specific ID filters were applied in Step 1 filter the data for those IDs
|
q1 = """
WITH
imp_u_clicks AS (
SELECT
user_id,
query_id.time_usec AS interaction_time,
0 AS cost,
'imp' AS interaction_type
FROM
adh.google_ads_impressions
WHERE
user_id != '0' {ID_filters}
"""
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Part 2 - Find all clicks from the clicks table:
Select all User IDs from the click table
Select the event_time
Mark the interaction type as 'click' for all of these rows
Filter for the dates set in Step 1 using the partition files to reduce BigQuery costs by only searching in files within a 2 day interval of the set date range
If specific ID filters were applied in Step 1 filter the data for those IDs
Use a union to create a single table with both impressions and clicks
|
q2 = """
UNION ALL (
SELECT
user_id,
click_id.time_usec AS interaction_time,
advertiser_click_cost_usd AS cost,
'click' AS interaction_type
FROM
adh.google_ads_clicks
WHERE
user_id != '0' AND impression_data.{id_type} IN ({IDs}) ) ),
"""
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
output example:
<table>
<tr>
<th>USER_ID</th>
<th>interaction_time</th>
<th>interaction_type</th>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
<td>impression</td>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
<td>impression</td>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
<td>click</td>
</tr>
<tr>
<td>002</td>
<td>timestamp</td>
<td>impression</td>
</tr>
</tr>
<tr>
<td>002</td>
<td>timestamp</td>
<td>click</td>
</tr>
</tr>
<tr>
<td>003</td>
<td>timestamp</td>
<td>impression</td>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
<td>impression</td>
</tr>
</table>
Part 3 - Calculate impressions and clicks per user:
For each user, calculate the number of impressions and clicks using the table created in Part 1 and 2
|
q3 = """
user_level_data AS (
SELECT
user_id,
SUM(IF(interaction_type = 'imp',
1,
0)) AS impressions,
SUM(IF(interaction_type = 'click',
1,
0)) AS clicks,
SUM(cost) AS cost
FROM
imp_u_clicks
GROUP BY
user_id)
"""
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
output example:
<table>
<tr>
<th>USER_ID</th>
<th>impressions</th>
<th>clicks</th>
</tr>
<tr>
<td>001</td>
<td>3</td>
<td>1</td>
</tr>
<tr>
<td>002</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>003</td>
<td>1</td>
<td>0</td>
</tr>
</table>
Part 4 - Calculate metrics per frequency:
Use the table created in Part 3 with metrics at user level to calculate metrics per each frequency
Frequency: The number of impressions served to each user
Clicks: The sum of clicks that occurred at each frequency
Impressions: The sum of all impressions that occurred at each frequency
Reach: The total number of unique users (the count of all user ids)
Group by Frequency
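The logic of Parts 3 and 4 can be sketched in pandas on a toy interaction log - an illustration of the aggregation only, not the ADH query itself:

```python
import pandas as pd

# Toy interaction log: one row per impression/click, as produced by Parts 1-2.
log = pd.DataFrame({
    'user_id': ['001', '001', '001', '001', '002', '002', '003'],
    'interaction_type': ['imp', 'imp', 'imp', 'click', 'imp', 'click', 'imp'],
})

# Part 3: impressions and clicks per user.
per_user = log.groupby('user_id')['interaction_type'].agg(
    impressions=lambda s: (s == 'imp').sum(),
    clicks=lambda s: (s == 'click').sum(),
).reset_index()

# Part 4: per-frequency totals; reach = number of users at that frequency.
per_freq = per_user.groupby('impressions', as_index=False).agg(
    clicks=('clicks', 'sum'),
    reach=('user_id', 'count'),
).rename(columns={'impressions': 'frequency'})
print(per_freq)
```

User 001 has frequency 3, while users 002 and 003 have frequency 1, so reach at frequency 1 is 2.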
|
q4 = """
SELECT
impressions AS frequency,
SUM(clicks) AS clicks,
SUM(impressions) AS impressions,
COUNT(*) AS reach,
SUM(cost) AS cost
FROM
user_level_data
GROUP BY
1
ORDER BY
frequency ASC
"""
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Create the query required for ADH
* When working with ADH the standard BigQuery query needs to be adapted to run in ADH
* This can be done via the API
|
import datetime
try:
full_circle_query = GetService()
except IOError as ex:
print('Unable to create ads data hub service - %s' % ex)
print('Did you specify the client secrets file?')
sys.exit(1)
d = datetime.datetime.today()
query_create_body = {
'name': query_name + '_' + d.strftime('%d-%m-%Y') + '_freq',
'title': query_name + '_' + d.strftime('%d-%m-%Y') + '_freq',
'queryText': query_text
}
try:
# Execute the request.
new_query = full_circle_query.customers().analysisQueries().create(body=query_create_body, parent='customers/' + str(customer_id)).execute()
global_query_name.append(new_query["name"])
except HttpError as e:
print(e)
sys.exit(1)
print('New query %s created for customer ID "%s":' % (query_name, customer_id))
print(json.dumps(new_query))
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Step 3b - Query for demographics & interest
|
qb1 = """
SELECT
a.affinity_category,
COUNT(imp.query_id.time_usec) AS impression,
COUNT(clk.click_id.time_usec) AS clicks,
COUNT(DISTINCT imp.user_id) AS reach,
COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency,
COUNT(clk.click_id.time_usec) /COUNT(imp.query_id.time_usec) AS ctr
FROM
adh.google_ads_impressions imp, UNNEST(affinity) aff
LEFT JOIN adh.google_ads_clicks clk USING (query_id)
JOIN adh.affinity a on aff = a.affinity_id
WHERE a.affinity_category != '' {ID_filters}
GROUP BY 1
ORDER BY 3 DESC
"""
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
output example:
<table>
<tr>
<th>affinity</th>
<th>impression</th>
<th>click</th>
<th>reach</th>
<th>frequency</th>
<th>ctr</th>
</tr>
<tr>
<td>Gamers</td>
<td>10</td>
<td>2</td>
<td>5</td>
<td>2</td>
<td>0.2</td>
</tr>
<tr>
<td>Music Lovers</td>
<td>20</td>
<td>10</td>
<td>4</td>
<td>5</td>
<td>0.5</td>
</tr>
<tr>
<td>Sports Fans</td>
<td>40</td>
<td>10</td>
<td>2</td>
<td>20</td>
<td>0.25</td>
</tr>
</table>
|
qb2 = """
SELECT
a.in_market_category,
COUNT(imp.query_id.time_usec) AS impression,
COUNT(clk.click_id.time_usec) AS clicks,
COUNT(DISTINCT imp.user_id) AS reach,
COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency,
COUNT(clk.click_id.time_usec) /COUNT(imp.query_id.time_usec) AS ctr
FROM
adh.google_ads_impressions imp, UNNEST(in_market) aff
LEFT JOIN adh.google_ads_clicks clk USING (query_id)
JOIN adh.in_market a on aff = a.in_market_id
WHERE a.in_market_category != '' {ID_filters}
GROUP BY 1
ORDER BY 3 DESC
"""
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
output example:
<table>
<tr>
<th>affinity</th>
<th>impression</th>
<th>click</th>
<th>reach</th>
<th>frequency</th>
<th>ctr</th>
</tr>
<tr>
<td>SUVs</td>
<td>10</td>
<td>2</td>
<td>5</td>
<td>2</td>
<td>0.2</td>
</tr>
<tr>
<td>Home & Garden</td>
<td>20</td>
<td>10</td>
<td>4</td>
<td>5</td>
<td>0.5</td>
</tr>
<tr>
<td>Education</td>
<td>40</td>
<td>10</td>
<td>2</td>
<td>20</td>
<td>0.25</td>
</tr>
</table>
|
qb3 = """
SELECT
gen.gender_name,
age.age_group_name,
COUNT(imp.query_id.time_usec) AS impression,
COUNT(clk.click_id.time_usec) AS clicks,
COUNT(DISTINCT imp.user_id) AS reach,
COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency,
COUNT(clk.click_id.time_usec) /COUNT(imp.query_id.time_usec) AS ctr
FROM
adh.google_ads_impressions imp
LEFT JOIN adh.google_ads_clicks clk USING (query_id)
LEFT JOIN adh.age_group age ON imp.demographics.age_group = age_group_id
LEFT JOIN adh.gender gen ON imp.demographics.gender = gender_id
WHERE {id_type} IN ({IDs})
GROUP BY 1,2
ORDER BY 1,2 DESC
"""
import datetime
try:
full_circle_query = GetService()
except IOError as ex:
print('Unable to create ads data hub service - %s' % ex)
print('Did you specify the client secrets file?')
sys.exit(1)
# create request body method
d = datetime.datetime.today()
demoQuery = [qb1, qb2, qb3]
def createRequestBody(arg):
data = {}
data['name'] = query_name + '_' + d.strftime('%d-%m-%Y') + '_demo_' +str(arg)
data['title'] = query_name + '_' + d.strftime('%d-%m-%Y') + '_demo_' + str(arg)
data['queryText'] = demoQuery[arg].format(**dc)
return data
#create multiple query
for i in range(len(demoQuery)):
try:
# Execute the request.
queryBody = createRequestBody(i)
new_query = full_circle_query.customers().analysisQueries().create(body=queryBody, parent='customers/' + str(customer_id)).execute()
global_query_name.append(new_query["name"])
except HttpError as e:
print(e)
sys.exit(1)
print('New query %s created for customer ID "%s":' % (queryBody['name'], customer_id))
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
output example:
<table>
<tr>
<th>affinity</th>
<th>impression</th>
<th>click</th>
<th>reach</th>
<th>frequency</th>
<th>ctr</th>
</tr>
<tr>
<td>SUVs</td>
<td>10</td>
<td>2</td>
<td>5</td>
<td>2</td>
<td>0.2</td>
</tr>
<tr>
<td>Home & Garden</td>
<td>20</td>
<td>10</td>
<td>4</td>
<td>5</td>
<td>0.5</td>
</tr>
<tr>
<td>Education</td>
<td>40</td>
<td>10</td>
<td>2</td>
<td>20</td>
<td>0.25</td>
</tr>
</table>
Check your query exists
https://adsdatahub.google.com/#/queries
Find your query in the my queries tab
Check and ensure your query is valid (there will be a green tick in the top right corner)
If your query is not valid hover over the red exclamation mark to see issues that need to be resolved
Step 4 - Run the query
Start the query
Pass the query to ADH using the full_circle_query service set up at the start
Pass in the dates, the destination table name in BigQuery and the customer ID
|
destination_table_full_path = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table
destination_table_full_path_affinity = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table_affinity
destination_table_full_path_inmarket = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table_inmarket
destination_table_full_path_age_gender = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table_age_gender
CUSTOMER_ID = customer_id
QUERY_NAME = query_name
DEST_TABLES = [destination_table_full_path, destination_table_full_path_affinity, destination_table_full_path_inmarket, destination_table_full_path_age_gender]
#Dates
format_str = '%Y-%m-%d' # The format
start_date_obj = datetime.datetime.strptime(start_date, format_str)
end_date_obj = datetime.datetime.strptime(end_date, format_str)
START_DATE = {
"year": start_date_obj.year,
"month": start_date_obj.month,
"day": start_date_obj.day
}
END_DATE = {
"year": end_date_obj.year,
"month": end_date_obj.month,
"day": end_date_obj.day
}
try:
full_circle_query = GetService()
except IOError as ex:
print('Unable to create ads data hub service - %s' % ex)
print('Did you specify the client secrets file?')
sys.exit(1)
query_start_body = {
'spec': {
'startDate': START_DATE,
'endDate': END_DATE
},
'destTable': '',
'customerId': CUSTOMER_ID
}
#run all queries
for i in range(len(global_query_name)):
try:
# Execute the request.
query_start_body['destTable'] = DEST_TABLES[i]
operation = full_circle_query.customers().analysisQueries().start(body=query_start_body, name=global_query_name[i]).execute()
except HttpError as e:
print(e)
sys.exit(1)
print('Running query with name "%s" via the following operation:' % query_name)
print(json.dumps(operation))
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Step 5 - Retrieve the table from BigQuery
Retrieve the results from BigQuery
Check to make sure the query has finished running and is saved in the new BigQuery Table
When it is done we can retrieve the results
|
import time
statusDone = False
while statusDone is False:
print("waiting for the job to complete...")
updatedOperation = full_circle_query.operations().get(name=operation['name']).execute()
if 'done' in updatedOperation and updatedOperation['done']:
statusDone = True
time.sleep(5)
print("Job completed... Getting results")
#run bigQuery query
dc = [big_query_dataset + '.' + big_query_destination_table,
big_query_dataset + '.' + big_query_destination_table_affinity,
big_query_dataset + '.' + big_query_destination_table_inmarket,
big_query_dataset + '.' + big_query_destination_table_age_gender]
qs = ['select * from {}'.format(q) for q in dc]
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
We are using the pandas library to run the query.
We pass in the query (q), the project id and set the SQL language to 'standard' (as opposed to legacy SQL)
|
# Run each query and save the result as a table (also known as a dataframe)
dfs = [pd.io.gbq.read_gbq(q, project_id=big_query_project, dialect='standard', reauth=True) for q in qs]
for i in range(4):
print(dfs[i])
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Save the output as a CSV
|
# Save the original dataframe as a csv file in case you need to recover the original data
dfs[0].to_csv('data_reach_freq.csv', index=False)
dfs[1].to_csv('data_affinity.csv', index=False)
dfs[2].to_csv('data_inmarket.csv', index=False)
dfs[3].to_csv('data_age_gender.csv', index=False)
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Set up the dataframes and preview them to check the data.
|
#prepare reach & frequency data
df = pd.read_csv('data_reach_freq.csv')
print(df.head())
#prepare affinity data
df3 = pd.read_csv('data_affinity.csv')
print(df3.head())
#prepare in_market data
df4 = pd.read_csv('data_inmarket.csv')
print(df4.head())
#prepare age & gender data
df5 = pd.read_csv('data_age_gender.csv')
print(df5.head())
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Step 6 - Set up the data and all the charts that will be plotted
6.1 Transform data
Use the calculation function created earlier to compute all the values based on your data
|
df = df[:max_freq+1] # Reduces the dataframe to have the size you set as the maximum frequency (max_freq)
df = df_calc_fields(df)
df2 = df.copy() # Copy the dataframe you calculated the fields in case you need to recover it
graphs = [] # Variable to save all graphics
df.head()
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Analysis 3: Determine impressions outside optimal frequency
Step 1: Define parameter to be the Optimal Frequency
The parameter below guides the impression-loss analysis. We will calculate the percentage of impressions served beyond the number you set as the optimal frequency.
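Worked through with hypothetical toy numbers, the arithmetic looks like this (remember that CPM is a cost per thousand impressions):

```python
# Hypothetical totals, not real campaign data.
total_impressions = 1_000_000
impressions_beyond_cap = 250_000
cpm = 12.0  # cost per thousand impressions

loss_ratio = impressions_beyond_cap / total_impressions
saving = cpm * impressions_beyond_cap / 1000  # divide by 1,000 because CPM is per mille

print(f"{loss_ratio:.1%} of impressions fall outside the optimal frequency")
print(f"Estimated saving: {saving:,.2f}")
```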
|
#@title 1.1 - Optimal Frequency
optimal_freq = 3#@param {type:"integer", allow-input: true}
slider_value = 1 #@param test
if optimal_freq > len(df2):
raise Exception('Your optimal frequency is higher than the maximum frequency in your campaign; please make sure it is lower than {}'.format(len(df2)))
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Output: Calculate impression loss
|
from __future__ import division
df2 = df_calc_fields(df2)
df_opt, df_not_opt = df[:optimal_freq], df[optimal_freq:]
total_impressions = list(df2.cumulative_impressions)[-1]
total_imp_not_opt = list(df_not_opt.cumulative_impressions)[-1] - list(df_opt.cumulative_impressions)[-1]
imp_not_opt_ratio = total_imp_not_opt / total_impressions
total_clicks = list(df2.cumulative_clicks)[-1]
total_clicks_not_opt = list(df_not_opt.cumulative_clicks)[-1] - list(df_opt.cumulative_clicks)[-1]
clicks_within_opt_ratio = 1-(total_clicks_not_opt / total_clicks)
print("{:.1f}% of your total impressions are out of the optimal frequency.".format(imp_not_opt_ratio*100))
print("{:,} of your impressions are out of the optimal frequency".format(total_imp_not_opt))
print("At a CPM of {} - preventing these would result in a cost saving of {:,.2f}".format(cpm, cpm*total_imp_not_opt/1000))
print("")
print("If you limited frequency to {}, you would still achieve {:.1f}% of your clicks".format(optimal_freq, clicks_within_opt_ratio*100))
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Analysis 4: Determine which affinity work best
Step 1: Set up charts
|
# calculate the color scale
diff_affinity = df3.ctr.max(0) - df3.ctr.min(0)
df3['colorscale'] = (df3.ctr - df3.ctr.min(0))/ diff_affinity *40 + 130
diff_in_market = df4.ctr.max(0) - df4.ctr.min(0)
df4['colorscale'] = (df4.ctr - df4.ctr.min(0))/ diff_in_market *40 + 130
diff_age_gender = df5.ctr.max(0) - df5.ctr.min(0)
df5['colorscale'] = (df5.ctr - df5.ctr.min(0))/ diff_age_gender *40 + 130
#prepare the graph, rerun if no graph shown
#affinity
graphs3 = []
size1=df3.reach
affinity = dict(type='scatter',
x= df3.frequency,
y= df3.ctr,
text= df3.affinity_category,
                name='Cumulative % Reach',
marker=dict(color= df3.colorscale,
size=size1,
sizemode='area',
sizeref=2.*max(size1)/(40.**2),
sizemin=4,
showscale=True),
mode='markers'
)
layout = dict(autosize= True,
title='affinity bubble chart',
xaxis=dict(autorange=True,
title= 'frequency'
),
yaxis=dict(autorange=True,
title= 'CTR'
)
)
fig = dict(data=[affinity], layout=layout)
graphs3.append(fig)
#in market
size2=df5.reach
in_market = dict(type='scatter',
x= df4.frequency,
y= df4.ctr,
text= df4.in_market_category,
                name='Cumulative % Reach',
marker=dict(color= df4.colorscale,
size=size2,
sizemode='area',
sizeref=2.*max(size2)/(40.**2),
sizemin=4,
showscale=True),
mode='markers'
)
layout = dict(autosize= True,
title='in_market bubble chart',
xaxis=dict(autorange=True,
title= 'frequency'
),
yaxis=dict(autorange=True,
title= 'CTR'
)
)
fig = dict(data=[in_market], layout=layout)
graphs3.append(fig)
#age & demo
male = dict(type='scatter',
x= df5.frequency[df5.gender_name == 'male'],
y= df5.ctr[df5.gender_name == 'male'],
text= df5.age_group_name[df5.gender_name == 'male'],
name='male',
marker=dict(size=df5.reach[df5.gender_name == 'male'],
sizemode='area',
sizeref=2.*max(df5.reach[df5.gender_name == 'male'])/(40.**2),
sizemin=4),
mode='markers'
)
female = dict(type='scatter',
x= df5.frequency[df5.gender_name == 'female'],
y= df5.ctr[df5.gender_name == 'female'],
text= df5.age_group_name[df5.gender_name == 'female'],
name='female',
marker=dict(size=df5.reach[df5.gender_name == 'female'],
sizemode='area',
sizeref=2.*max(df5.reach[df5.gender_name == 'female'])/(40.**2),
sizemin=4
),
mode='markers'
)
unknown = dict(type='scatter',
x= df5.frequency[df5.gender_name == 'unknown'],
y= df5.ctr[df5.gender_name == 'unknown'],
text= df5.age_group_name[df5.gender_name == 'unknown'],
name='unknown',
marker=dict(size=df5.reach[df5.gender_name == 'unknown'],
sizemode='area',
sizeref=2.*max(df5.reach[df5.gender_name == 'unknown'])/(40.**2),
sizemin=4),
mode='markers'
)
layout = dict(autosize= True,
title='age & gender bubble chart',
xaxis=dict(autorange=True,
title= 'frequency'
),
yaxis=dict(autorange=True,
title= 'CTR'
)
)
fig = dict(data=[male,female,unknown], layout=layout)
graphs3.append(fig)
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Output: Visualise the data
Affinity bubble chart
What affinity is underreaching?
What affinity should I switch the cost to?
|
enable_plotly_in_cell()
iplot(graphs3[0])
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Check whether any big bubbles sit in the bottom left corner; those are the affinities with the lowest efficiency.
Bubbles in the upper left corner are the affinities with high potential.
Keep your investment in the upper right corner bubbles.
Shift budget from the bottom right corner to other areas.
In market bubble chart
What in market segment is underreaching?
What in market segment should I switch the cost to?
|
enable_plotly_in_cell()
iplot(graphs3[1])
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Check whether any big bubbles sit in the bottom left corner; those are the in-market segments with the lowest efficiency.
Bubbles in the upper left corner are the in-market segments with high potential.
Keep your investment in the upper right corner bubbles.
Shift budget from the bottom right corner to other areas.
Age & gender bubble chart
Which gender is underreaching?
Which age group should I switch the cost to?
|
enable_plotly_in_cell()
iplot(graphs3[2])
|
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
|
google/data-pills
|
apache-2.0
|
Loading a model
HanLP's workflow starts with loading a model. Model identifiers are stored in the hanlp.pretrained package, organized by NLP task.
|
import hanlp
hanlp.pretrained.sts.ALL  # for the language, see the last field of the identifier or the corresponding corpus
|
plugins/hanlp_demo/hanlp_demo/zh/sts_stl.ipynb
|
hankcs/HanLP
|
apache-2.0
|
Call hanlp.load to load it; the model is automatically downloaded to the local cache:
|
sts = hanlp.load(hanlp.pretrained.sts.STS_ELECTRA_BASE_ZH)
|
plugins/hanlp_demo/hanlp_demo/zh/sts_stl.ipynb
|
hankcs/HanLP
|
apache-2.0
|
Semantic textual similarity
Pass a list of pairs of short texts to compute their semantic textual similarity:
|
sts([
('看图猜一电影名', '看图猜电影'),
('无线路由器怎么无线上网', '无线上网卡和无线路由器怎么用'),
('北京到上海的动车票', '上海到北京的动车票'),
])
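Assuming the call returns one similarity score (a float) per pair, a hypothetical sketch of keeping only near-duplicate pairs with a threshold — the scores below are made up for illustration, not actual model output:

```python
def filter_similar(pairs, scores, threshold=0.9):
    # Keep only the text pairs whose similarity score reaches the threshold.
    return [pair for pair, score in zip(pairs, scores) if score >= threshold]

pairs = [('看图猜一电影名', '看图猜电影'),
         ('北京到上海的动车票', '上海到北京的动车票')]
scores = [0.98, 0.45]  # hypothetical scores, not real sts() output
print(filter_similar(pairs, scores))  # → [('看图猜一电影名', '看图猜电影')]
```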
|
plugins/hanlp_demo/hanlp_demo/zh/sts_stl.ipynb
|
hankcs/HanLP
|
apache-2.0
|
Sometimes you need both the index of an element and the element itself while you iterate through a list. This can be achieved with the enumerate function.
|
for i, e in enumerate(greek):
    print(e + " is at index: " + str(i))
|
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
|
vbarua/PythonWorkshop
|
mit
|
The above is taking advantage of tuple decomposition, because enumerate generates a list of tuples. Let's look at this more explicitly.
|
list_of_tuples = [(1,2,3),(4,5,6), (7,8,9)]
for (a, b, c) in list_of_tuples:
print(a + b + c)
|
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
|
vbarua/PythonWorkshop
|
mit
|
If you have two or more lists you want to iterate over together you can use the zip function.
|
for e1, e2 in zip(greek, greek):
    print("Double Greek: " + e1 + e2)
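zip shines when the lists differ; for example, a small sketch pairing element symbols with their names:

```python
symbols = ["H", "He", "Li"]
names = ["Hydrogen", "Helium", "Lithium"]
for symbol, name in zip(symbols, names):
    print(symbol + ": " + name)  # prints "H: Hydrogen", then "He: Helium", ...
```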
|
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
|
vbarua/PythonWorkshop
|
mit
|
You can also iterate over tuples
|
for i in (1,2,3):
    print(i)
|
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
|
vbarua/PythonWorkshop
|
mit
|
and dictionaries in various ways
|
elements = {
"H": "Hydrogen",
"He": "Helium",
"Li": "Lithium",
}
for key in elements: # Over the keys
    print(key)

for key, val in elements.items(): # Over the keys and values
    print(key + ": " + val)

for val in elements.values(): # Over the values
    print(val)
|
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
|
vbarua/PythonWorkshop
|
mit
|
Conditionals
Conditionals are a way of having certain actions occur only on specific conditions.
|
x = 5
if x % 2 == 0: # Checks if x is even
    print(str(x) + " is even!!!")
if x % 2 == 1: # Checks if x is odd
    print(str(x) + " is odd!!!")
|
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
|
vbarua/PythonWorkshop
|
mit
|
In the above example the second condition is somewhat redundant because if x is not even it is odd. There's a better way of expressing this.
|
if x % 2 == 0: # Checks if x is even
    print(str(x) + " is even!!!")
else: # x is odd
    print(str(x) + " is odd!!!")
|
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
|
vbarua/PythonWorkshop
|
mit
|
You can have more than two conditions.
|
if x % 4 == 0:
    print("x divisible by 4")
elif x % 3 == 0 and x % 5 == 0:
    print("x divisible by 3 and 5")
elif x % 1 == 0:
    print("x divisible by 1")
else:
    print("I give up")
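Only the first matching branch runs, even when later tests would also pass. A small sketch: with x = 12 both the first and second tests are true, yet only the first branch fires:

```python
x = 12
taken = []
if x % 4 == 0:
    taken.append("divisible by 4")
elif x % 3 == 0:
    taken.append("divisible by 3")  # also true for 12, but never reached
print(taken)  # → ['divisible by 4']
```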
|
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
|
vbarua/PythonWorkshop
|
mit
|
Note how only the first conditional branch that matches gets executed. The else condition is a catch-all that would have executed if none of the other branches had been chosen.
Bonus: Loop Comprehensions
You can use the for keyword to generate new lists with data using list comprehensions.
|
r1 = []
for i in range(10):
    r1.append(i**2)

# Equivalent to the loop above.
r2 = [i**2 for i in range(10)]
print(r1)
print(r2)

r1 = []
for i in range(30):
    if (i%2 == 0 and i%3 == 0):
        r1.append(i)

# Equivalent to the loop above.
r2 = [i for i in range(30) if i%2==0 and i%3==0]
print(r1)
print(r2)
|
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
|
vbarua/PythonWorkshop
|
mit
|
Load the data
As a first step we will load a large dataset using dask. If you have followed the setup instructions you will have downloaded a large CSV containing 12 million taxi trips. Let's load this data using dask to create a dataframe ddf:
|
ddf = dd.read_csv('../data/nyc_taxi.csv', parse_dates=['tpep_pickup_datetime'])
ddf['hour'] = ddf.tpep_pickup_datetime.dt.hour
# If your machine is low on RAM (<8GB) don't persist (though everything will be much slower)
ddf = ddf.persist()
print('%s Rows' % len(ddf))
print('Columns:', list(ddf.columns))
|
notebooks/07-working-with-large-datasets.ipynb
|
ioam/scipy-2017-holoviews-tutorial
|
bsd-3-clause
|
Create a dataset
In previous sections we have already seen how to declare a set of Points from a pandas DataFrame. Here we do the same for a Dask dataframe passed in with the desired key dimensions:
|
points = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y'])
|
notebooks/07-working-with-large-datasets.ipynb
|
ioam/scipy-2017-holoviews-tutorial
|
bsd-3-clause
|