Dataset columns:
markdown: string (0 to 37k chars)
code: string (1 to 33.3k chars)
path: string (8 to 215 chars)
repo_name: string (6 to 77 chars)
license: 15 classes
Explore the Data Play around with view_sentence_range to view different parts of the data.
view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np # source_text.split() with empty string to get all words without the '\n' # source_text.split('\n') to get all sentences print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in...
project4/files/dlnd_language_translation.ipynb
myfunprograms/deep_learning
gpl-3.0
Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence fr...
def text_to_ids_helper(text, vocab_to_int, appendEOS=False): sentences = [sentence if not appendEOS else sentence + ' <EOS>' for sentence in text.split('\n')] ids = [[vocab_to_int[word] for word in sentence.split()] for sentence in sentences] return ids def text_to_ids(source_text, target_text, source_voca...
project4/files/dlnd_language_translation.ipynb
myfunprograms/deep_learning
gpl-3.0
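The `text_to_ids` snippet above is truncated; a minimal sketch of the full transformation, assuming `source_vocab_to_int` and `target_vocab_to_int` already contain every word plus the special `<EOS>` token, could look like this:

```python
def text_to_ids_sketch(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """Convert source/target text to id lists; append <EOS> only to target sentences."""
    source_ids = [[source_vocab_to_int[w] for w in line.split()]
                  for line in source_text.split('\n')]
    target_ids = [[target_vocab_to_int[w] for w in (line + ' <EOS>').split()]
                  for line in target_text.split('\n')]
    return source_ids, target_ids
```

For example, with a two-word vocabulary per language, `'hello world'` maps to `[[0, 1]]` on the source side while the target side gets the extra `<EOS>` id appended.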
Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concatenate the GO ID to the beginning of each batch.
def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data ...
project4/files/dlnd_language_translation.ipynb
myfunprograms/deep_learning
gpl-3.0
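The notebook's version uses TensorFlow ops (the description above maps to `tf.strided_slice` plus `tf.concat`); a NumPy sketch of the same batch transformation, with the `<GO>` id passed in explicitly, is:

```python
import numpy as np

def process_decoding_input_sketch(target_data, go_id):
    """Drop the last token of each sequence and prepend the <GO> id.
    target_data: (batch_size, seq_len) int array."""
    batch_size = target_data.shape[0]
    trimmed = target_data[:, :-1]                      # remove last word id per batch row
    go_col = np.full((batch_size, 1), go_id, dtype=target_data.dtype)
    return np.concatenate([go_col, trimmed], axis=1)   # prepend <GO> to each row
```

So `[[1, 2, 3]]` with `go_id=9` becomes `[[9, 1, 2]]`: same sequence length, shifted right by one.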
Encoding Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function ...
project4/files/dlnd_language_translation.ipynb
myfunprograms/deep_learning
gpl-3.0
Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded ...
project4/files/dlnd_language_translation.ipynb
myfunprograms/deep_learning
gpl-3.0
Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder R...
project4/files/dlnd_language_translation.ipynb
myfunprograms/deep_learning
gpl-3.0
Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output function using lambda to transform its input, logits, to class logits. Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_le...
tf.reset_default_graph() def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embed...
project4/files/dlnd_language_translation.ipynb
myfunprograms/deep_learning
gpl-3.0
Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function...
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Inpu...
project4/files/dlnd_language_translation.ipynb
myfunprograms/deep_learning
gpl-3.0
Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_siz...
# Number of Epochs epochs = 5 # Batch Size batch_size = 512 # RNN Size rnn_size = 512 # Number of Layers num_layers = 1 # Embedding Size: according to unique words (Is this assumption valid?) encoding_embedding_size = 300 decoding_embedding_size = 300 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability kee...
project4/files/dlnd_language_translation.ipynb
myfunprograms/deep_learning
gpl-3.0
Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase. Convert words into ids using vocab_to_int. Convert words not in the vocabulary to the <UNK> word id.
def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function id_UNK = vocab_to_int['<UNK>'] ids = [vocab_to_int.g...
project4/files/dlnd_language_translation.ipynb
myfunprograms/deep_learning
gpl-3.0
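The truncated cell above already shows the key idea (`dict.get` with the `<UNK>` id as the default); a complete sketch:

```python
def sentence_to_seq_sketch(sentence, vocab_to_int):
    """Lowercase the sentence and map each word to its id, falling back to <UNK>."""
    unk_id = vocab_to_int['<UNK>']
    return [vocab_to_int.get(word, unk_id) for word in sentence.lower().split()]
```

With `{'<UNK>': 0, 'hello': 1}`, the sentence `'Hello there'` becomes `[1, 0]`: known words get their ids, unknown ones the `<UNK>` id.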
Step 2: Calculate the regression coefficients
#Reshape the data g_list = np.reshape([g[0,0] for g in gs], (N,1)) As_list = np.reshape(As, (N,4)) Bs_list = np.reshape(Bs, (N,2)) Ps_list = np.concatenate((np.ones((N,1)), As_list[:,:1], As_list[:,2:], Bs_list), axis=1) from numpy.linalg import inv RC = np.dot( np.dot( inv( (np.dot(Ps_list.T, Ps_list)) ), Ps_list.T)...
Code/SSRC_LCA_evelynegroen.ipynb
evelynegroen/evelynegroen.github.io
mit
Step 3: Calculate the squared standardized regression coefficients
import statistics as stats var_g = stats.variance(g_list[:,0]) var_x = [stats.variance(Ps_list[:,k]) for k in range(1,6)] SSRC = (var_x/var_g) * (RC[1:6,0]**2) print("squared standardized regression coefficients:", SSRC)
Code/SSRC_LCA_evelynegroen.ipynb
evelynegroen/evelynegroen.github.io
mit
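As a self-contained illustration of the SSRC recipe above (ordinary least squares via the normal equations, then scaling each squared coefficient by var(x_k)/var(g)), here is a sketch on synthetic data rather than the notebook's A and B matrices; for a noise-free linear model with near-independent inputs, the SSRCs should sum to roughly one:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
X = rng.normal(size=(N, 2))                  # two input parameters
g = 2.0 * X[:, 0] + 1.0 * X[:, 1]            # noise-free linear output
P = np.concatenate([np.ones((N, 1)), X], axis=1)
RC = np.linalg.solve(P.T @ P, P.T @ g)       # regression coefficients [c0, c1, c2]
var_g = g.var(ddof=1)
var_x = X.var(axis=0, ddof=1)
SSRC = (var_x / var_g) * RC[1:] ** 2         # squared standardized coefficients
# With no noise and (near-)independent inputs, SSRC.sum() is close to 1:
# each SSRC is the fraction of output variance attributed to that input.
```

The deviation of the sum from one reflects only the sample covariance between the two inputs.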
Visualize
import matplotlib.pyplot as plt SSRC_procent = SSRC * 100 x_label=[ 'A(1,1)', 'A(2,1)', 'A(2,2)', 'B(1,1)', 'B(1,2)'] x_pos = range(5) plt.bar(x_pos, SSRC_procent, align='center') plt.xticks(x_pos, x_label) plt.title('Global sensitivity analysis: squared standardized regression coeffic...
Code/SSRC_LCA_evelynegroen.ipynb
evelynegroen/evelynegroen.github.io
mit
B-DNA Double-Stranded Parameters
r = 1 # (nm) dsDNA radius δ = 0.34 # (nm) dsDNA base-pair pitch n = 10.5 # number of bases per turn Δφ = 2.31 # (radians) minor-groove angle between the two strands' backbones φ = 2*pi/n # (radians) rotation per base pair
DNA model.ipynb
tritemio/multispot_paper
mit
Dye and Linker Geometry <img src="figures/DNA1.png" style="width:300px;float:left;"> <img src="figures/DNA2.png" style="width:300px;float:left;"> <img src="figures/DNA3.png" style="width:300px;float:left;"> Fraction for the segment $SH_1$ over $H_1H_2$:
def dye_position(i, l=1.6, λ=0.5, ψ=0): # global structural params: r, δ, n, Δφ φ = 2*pi/n # (radians) rotation per base pair Dx = r * cos(φ*i) + λ*( r*cos(φ*i + Δφ) - r*cos(φ*i) ) + l*cos(ψ)*cos(φ*i + 0.5*Δφ) Dy = r * sin(φ*i) + λ*( r*sin(φ*i + Δφ) - r*sin(φ*i) ) + l*cos(ψ)*sin(φ*i + 0.5*Δφ) ...
DNA model.ipynb
tritemio/multispot_paper
mit
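As a cross-check of the geometry above, here is a stripped-down sketch of the first term of `dye_position` (the backbone point itself, with no groove offset `Δφ` and no linker `l`); every such point must sit on the circle of radius `r`:

```python
from math import pi, cos, sin, hypot

r, n = 1.0, 10.5          # helix radius (nm) and bases per turn, as defined above

def backbone_xy(i):
    """x, y of base i on one strand of an ideal helix (the z coordinate would be i*δ)."""
    phi = 2 * pi / n      # rotation per base pair
    return r * cos(phi * i), r * sin(phi * i)

x, y = backbone_xy(7)     # base 7, the donor position used below
# hypot(x, y) == r for any i, since the point lies on the helix cylinder
```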
Donor
λ = 0.5 ψ = 0 i = 7 # number of bases from reference "base 0" l = 1.6 # (nm) distance between S and dye position D dye_position(7) bp = np.arange(0, 40) PD = dye_position(bp) PA = dye_position(bp, l=1.6, λ=0.5, ψ=pi) axes = plot_dye(PD) plot_dye(PA, axes, color='r'); bp = np.arange(0, 40, 0.1) PD = dye_posit...
DNA model.ipynb
tritemio/multispot_paper
mit
Alright, in this section we're going to continue with the running dataset, but we're going to dive a bit deeper into ways of analyzing the data, including filtering, dropping rows, doing some groupings, and that sort of thing. So what we'll do is read in our CSV file.
pd.read_csv? list(range(1,7)) df = pd.read_csv('../data/date_fixed_running_data_with_time.csv', parse_dates=['Date'], usecols=list(range(0,6))) df.dtypes df.sort(inplace=True) df.head()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Now let's talk about getting some summary statistics. I would encourage you to try these on your own. We've learned pretty much everything we need in order to be able to do these without guidance, and as always, if you need clarification just ask on the side. What was the longest run in miles and minutes that I ran?
df.Minutes.max() df.Miles.max()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
What about the shortest in miles and minutes that I ran?
df.Minutes.min() df.Miles.min()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
We forgot to ignore our null values, so how would we get the minimum while ignoring those?
df.Miles[df.Miles > 0].min()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
What was the most common running distance I did, excluding times when I didn't run at all?
df.Miles[df.Miles > 0].value_counts().index[0]
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
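On a toy series (an assumed minimal dataset, not the notebook's), the same chained expression picks out the mode of the nonzero values:

```python
import pandas as pd

miles = pd.Series([0.0, 3.1, 3.1, 5.0, 0.0, 3.1])
# value_counts sorts by frequency, so index[0] is the most frequent nonzero distance
most_common = miles[miles > 0].value_counts().index[0]
```

Here `most_common` is `3.1`, since it appears three times among the nonzero runs.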
Plot a graph of the cumulative running distance in this dataset.
df.Miles.cumsum().plot() plt.xlabel("Day Number") plt.ylabel("Distance")
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Plot a graph of the cumulative running hours in this data set.
(df.Minutes.fillna(0).cumsum() / 60).plot()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Another interesting question we could ask is what days of the week I commonly go for runs. Am I faster on certain days, or does my speed improve over time relative to the distance that I'm running? So let's get our days of the week.
df.Date[0].strftime("%A")
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
We will do that by mapping our date column to the time format we need.
df.Date.map(lambda x: x.strftime("%A")).head()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Then we just set that to a new column.
df['Day_of_week'] = df.Date.map(lambda x: x.strftime("%A")) df.head(10)
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
and we can make a bar plot of it, but let's see if we can distinguish anything unique about certain days of the week.
df[df.Miles > 0].Day_of_week.value_counts().plot(kind='bar')
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
We will do that by creating groups of data frames. We can see that in this sample I run a lot more on Friday, Saturday, and Monday. Some interesting patterns. Why don't we try looking at the means and that sort of thing? But before we get there, at this point, our data frame is getting pretty messy and I think it's wo...
del(df['Time'])
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
del will delete it in place
df.head()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Finally we can use drop to drop a column. Now we have to specify the axis (we can also use this to drop rows); note that this does not happen in place.
df.drop('Seconds',axis=1)
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
We can also use drop to drop a specific row by specifying axis 0.
tempdf = pd.DataFrame(np.arange(4).reshape(2,2)) tempdf tempdf.drop(1,axis=0)
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
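A self-contained sketch of both drop axes (using assumed column names `a` and `b`); note that without `inplace=True`, `drop` returns a copy and leaves the original untouched:

```python
import numpy as np
import pandas as pd

tempdf = pd.DataFrame(np.arange(4).reshape(2, 2), columns=['a', 'b'])
no_col = tempdf.drop('b', axis=1)   # copy without column 'b'
no_row = tempdf.drop(1, axis=0)     # copy without the row labeled 1
# tempdf itself still has both rows and both columns
```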
We already saw how to create a new column; we can also create a new row using the append method. This takes in a data frame or Series and appends it to the end of the data frame.
tempdf.append(pd.Series([4,5]), ignore_index=True) df.head()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
We can also pop out a column which will remove it from a data frame and return the Series. You'll see that it happens in place.
df.pop('Seconds') df.head()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Now we've made our dataset a bit more manageable. We've kind of just got the basics of what we need to perform some groupwise analysis. Now at this point we're going to do some groupings. This is an extremely powerful part of pandas and one that you'll use all the time. pandas follows the Split-Apply-Combine style ...
for dow in df.Day_of_week.unique(): print(dow) print(df[df.Day_of_week == dow]) break
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
This is clearly an ugly way to do this, and pandas provides a much simpler way of approaching this problem: creating a groupby object. But first I'm going to filter out our zero values because they'll throw off our analysis.
df['Miles'] = df.Miles[df.Miles > 0] dows = df.groupby('Day_of_week') print(dows)
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
We can get the size of each one by using the size command. This basically tells us how many items are in each category.
dows.size() dows.count()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Now we have our groups and we can start doing groupwise analysis, now what does that mean? It means we can start answering questions like what is the average speed per weekday or what is the total miles run per weekday?
dows.mean() dows.sum()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
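On a tiny assumed dataset, the groupwise pattern looks like this:

```python
import pandas as pd

runs = pd.DataFrame({
    'Day_of_week': ['Monday', 'Monday', 'Friday'],
    'Miles': [3.0, 5.0, 4.0],
})
by_day = runs.groupby('Day_of_week')
means = by_day['Miles'].mean()   # average miles per weekday
totals = by_day['Miles'].sum()   # total miles per weekday
```

Here `means['Monday']` is 4.0 (the average of 3.0 and 5.0) and `totals['Friday']` is 4.0.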
It might be interesting to see the total sum of the amount of runs to try and see any outliers simply because Thursday, Friday, Saturday are close in distances, relatively, but not so much in speed. We also get access to a lot of summary statistics from here that we can get from the groups.
dows.describe() df.groupby('Day_of_week').mean() df.groupby('Day_of_week').std()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Iterating through the groups is also very straightforward.
for name, group in dows: print(name) print(group)
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
You can get specific groups by using the get_group method.
dows.get_group('Friday')
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
We can use an aggregation command to perform an operation that gets the counts for each group.
dows.agg(lambda x: len(x))['Miles']
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
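The same `agg` call on a toy frame (assumed data; `size()` is the cleaner built-in equivalent for counting):

```python
import pandas as pd

runs = pd.DataFrame({'Day_of_week': ['Mon', 'Mon', 'Fri'], 'Miles': [3.0, 5.0, 4.0]})
# agg applies the callable to each column within each group; we then pick 'Miles'
counts = runs.groupby('Day_of_week').agg(lambda x: len(x))['Miles']
# equivalent: runs.groupby('Day_of_week')['Miles'].size()
```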
Another way to do this would be to add a count column to our data frame, then sum up each column.
df['Count'] = 1 df.head(10) df.groupby('Day_of_week').sum()
4 - pandas Basics/4-7 pandas DataFrame Summary Statistics, Filtering, Dropping and adding Rows and Columns, Grouping Basics.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
For the purposes of this notebook we are going to use one of the predefined objective functions that come with GPyOpt. However the key thing to realize is that the function could be anything (e.g., the results of a physical experiment). As long as users are able to externally evaluate the suggested points somehow and p...
func = GPyOpt.objective_examples.experiments1d.forrester()
manual/GPyOpt_external_objective_evaluation.ipynb
SheffieldML/GPyOpt
bsd-3-clause
Now we define the domain of the function to optimize as usual.
domain =[{'name': 'var1', 'type': 'continuous', 'domain': (0,1)}]
manual/GPyOpt_external_objective_evaluation.ipynb
SheffieldML/GPyOpt
bsd-3-clause
First we are going to run the optimization loop outside of GPyOpt, and only use GPyOpt to get the next point to evaluate our function. There are two things to pay attention to when creating the main optimization object: * Objective function f is explicitly set to None * Since we recreate the object anew for each iterati...
X_init = np.array([[0.0],[0.5],[1.0]]) Y_init = func.f(X_init) iter_count = 10 current_iter = 0 X_step = X_init Y_step = Y_init while current_iter < iter_count: bo_step = GPyOpt.methods.BayesianOptimization(f = None, domain = domain, X = X_step, Y = Y_step) x_next = bo_step.suggest_next_locations() y_next...
manual/GPyOpt_external_objective_evaluation.ipynb
SheffieldML/GPyOpt
bsd-3-clause
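The shape of that suggest-evaluate-append loop, with a random-search stand-in for `suggest_next_locations` so it runs without GPyOpt (the Forrester function here is the standard 1-D test function the example optimizes):

```python
import numpy as np

def forrester(x):
    """The 1-D Forrester test function on [0, 1]."""
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

rng = np.random.default_rng(1)
X_step = np.array([[0.0], [0.5], [1.0]])   # initial design
Y_step = forrester(X_step)
for _ in range(10):
    # stand-in for bo_step.suggest_next_locations(): here just a random candidate
    x_next = rng.uniform(0, 1, size=(1, 1))
    y_next = forrester(x_next)             # the "external" evaluation happens here
    X_step = np.vstack([X_step, x_next])   # grow the dataset for the next iteration
    Y_step = np.vstack([Y_step, y_next])
best_x = X_step[np.argmin(Y_step)]
```

The only GPyOpt-specific part of the real loop is the suggestion step; everything else (evaluating externally and re-feeding X and Y) is exactly this pattern.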
Let's visualize the results. The size of the marker denotes the order in which the point was evaluated: the bigger the marker, the later the evaluation.
x = np.arange(0.0, 1.0, 0.01) y = func.f(x) plt.figure() plt.plot(x, y) for i, (xs, ys) in enumerate(zip(X_step, Y_step)): plt.plot(xs, ys, 'rD', markersize=10 + 20 * (i+1)/len(X_step))
manual/GPyOpt_external_objective_evaluation.ipynb
SheffieldML/GPyOpt
bsd-3-clause
To compare the results, let's now execute the whole loop with GPyOpt.
bo_loop = GPyOpt.methods.BayesianOptimization(f = func.f, domain = domain, X = X_init, Y = Y_init) bo_loop.run_optimization(max_iter=iter_count) X_loop = bo_loop.X Y_loop = bo_loop.Y
manual/GPyOpt_external_objective_evaluation.ipynb
SheffieldML/GPyOpt
bsd-3-clause
Now let's print the results of this optimization and compare to the previous external evaluation run. As before, the size of the marker corresponds to its evaluation order.
plt.figure() plt.plot(x, y) for i, (xl, yl) in enumerate(zip(X_loop, Y_loop)): plt.plot(xl, yl, 'rD', markersize=10 + 20 * (i+1)/len(X_loop))
manual/GPyOpt_external_objective_evaluation.ipynb
SheffieldML/GPyOpt
bsd-3-clause
To allow even more control over the execution, this API allows you to specify points that should be ignored (say the objective is known to fail in certain locations), as well as points that are already pending evaluation (say in case the user is running several candidates in parallel). Here is how one can provide this infor...
pending_X = np.array([[0.75]]) ignored_X = np.array([[0.15], [0.85]]) bo = GPyOpt.methods.BayesianOptimization(f = None, domain = domain, X = X_step, Y = Y_step, de_duplication = True) bo.suggest_next_locations(pending_X = pending_X, ignored_X = ignored_X)
manual/GPyOpt_external_objective_evaluation.ipynb
SheffieldML/GPyOpt
bsd-3-clause
HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.
from IPython.display import HTML, display s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h)
days/day08/Display.ipynb
rvperry/phys202-2015-work
mit
Define some PV system parameters.
surface_tilt = 30 surface_azimuth = 180 # pvlib uses 0=North, 90=East, 180=South, 270=West convention albedo = 0.2 start = datetime.now() # today's date end = start + timedelta(days=7) # 7 days from today timerange = pd.date_range(start, end, tz=tz) # Define forecast model fm = GFS() #fm = NAM() #fm = NDFD() #fm = RA...
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
This is a pandas DataFrame object. It has a lot of great properties that are beyond the scope of our tutorials.
forecast_data['temperature'].plot()
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
Plot the GHI data
ghi = forecast_data['ghi'] ghi.plot() plt.ylabel('Irradiance ($W m^{-2}$)')
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
Calculate modeling intermediates Before we can calculate power for all the forecast times, we will need to calculate: * solar position * extraterrestrial radiation * airmass * angle of incidence * POA sky and ground diffuse radiation * cell and module temperatures The approach here follows that of the pvlib tmy_to_po...
# retrieve time and location parameters time = forecast_data.index a_point = fm.location solpos = solarposition.get_solarposition(time, a_point) #solpos.plot()
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
The funny looking jump in the azimuth is just due to the coarse time sampling in the TMY file. DNI ET Calculate extraterrestrial radiation. This is needed for many plane-of-array diffuse irradiance models.
dni_extra = irradiance.extraradiation(fm.time) dni_extra = pd.Series(dni_extra, index=fm.time) #dni_extra.plot() #plt.ylabel('Extraterrestrial radiation ($W m^{-2}$)')
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
Cell and module temperature Calculate PV cell and module temperature.
temperature = forecast_data['temperature'] wnd_spd = forecast_data['wind_speed'] pvtemps = pvsystem.sapm_celltemp(poa_irrad['poa_global'], wnd_spd, temperature) pvtemps.plot() plt.ylabel('Temperature (C)')
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
DC power using SAPM Get module data from the web.
sandia_modules = pvsystem.retrieve_sam(name='SandiaMod')
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
Run the SAPM using the parameters we calculated above.
sapm_out = pvsystem.sapm(sandia_module, poa_irrad.poa_direct, poa_irrad.poa_diffuse, pvtemps['temp_cell'], airmass, aoi) #print(sapm_out.head()) sapm_out[['p_mp']].plot() plt.ylabel('DC Power (W)')
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
DC power using single diode
cec_modules = pvsystem.retrieve_sam(name='CECMod') cec_module = cec_modules.Canadian_Solar_CS5P_220M photocurrent, saturation_current, resistance_series, resistance_shunt, nNsVth = ( pvsystem.calcparams_desoto(poa_irrad.poa_global, temp_cell=pvtemps['temp_cell'], ...
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
Choose a particular inverter
sapm_inverter = sapm_inverters['ABB__MICRO_0_25_I_OUTD_US_208_208V__CEC_2014_'] sapm_inverter p_acs = pd.DataFrame() p_acs['sapm'] = pvsystem.snlinverter(sapm_inverter, sapm_out.v_mp, sapm_out.p_mp) p_acs['sd'] = pvsystem.snlinverter(sapm_inverter, single_diode_out.v_mp, single_diode_out.p_mp) p_acs.plot() plt.ylabel...
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
Plot just a few days.
p_acs[start:start+timedelta(days=2)].plot()
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
Some statistics on the AC power
p_acs.describe() p_acs.sum()
docs/tutorials/notebooks/forecast_to_power.ipynb
MoonRaker/pvlib-python
bsd-3-clause
Authenticate against the ADH API (see the ADH documentation).
#!/usr/bin/python # # Copyright 2017 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required b...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Frequency Analysis <b>Purpose:</b> This tool should be used to guide you in defining an optimal frequency cap considering the CTR curve. Because of that, it is most useful in awareness use cases. Key notes For some campaigns the user ID will be <b>zeroed</b> (e.g. Google Data, ITP browsers and YouTube Data), therefore <b>excl...
#@title Define ADH configuration parameters customer_id = 000000001 #@param query_name = 'test1' #@param big_query_project = 'adh-scratch' #@param Destination Project ID big_query_dataset = 'test' #@param Destination Dataset big_query_destination_table = 'freqanalysis_test' #@param Destination Table big_query_destinati...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Step 2 - Create a function for the final calculations From DT data Calculate metrics using pandas Pass through the pandas dataframe when you call this function
def df_calc_fields(df): df['ctr'] = df.clicks / df.impressions df['cpc'] = df.cost / df.clicks df['cumulative_clicks'] = df.clicks.cumsum() df['cumulative_impressions'] = df.impressions.cumsum() df['cumulative_reach'] = df.reach.cumsum() df['cumulative_cost'] = df.cost.cumsum() df['coverag...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Step 3 - Build the query Step 3a - Query for reach & frequency Set up the variables.
# Build the query dc = {} if (IDs == ""): dc['ID_filters'] = "" else: dc['id_type'] = id_type dc['IDs'] = IDs dc['ID_filters'] = '''AND {id_type} IN ({IDs})'''.format(**dc) #create global query list global_query_name = []
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
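The `.format(**dc)` idiom above expands the dictionary into the template so the same `dc` can keep accumulating query fragments; a minimal example:

```python
# assumed example values; the notebook fills these from user parameters
dc = {'id_type': 'campaign_id', 'IDs': '123, 456'}
dc['ID_filters'] = 'AND {id_type} IN ({IDs})'.format(**dc)
# dc['ID_filters'] is now "AND campaign_id IN (123, 456)"
```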
Part 1 - Find all impressions from the impression table: * Select all user IDs from the impression table * Select the event_time * Mark the interaction type as 'imp' for all of these rows * Filter for the dates set in Step 1 using the partition files to reduce BigQuery costs by only searching in files within a 2 day i...
q1 = """ WITH imp_u_clicks AS ( SELECT user_id, query_id.time_usec AS interaction_time, 0 AS cost, 'imp' AS interaction_type FROM adh.google_ads_impressions WHERE user_id != '0' {ID_filters} """
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Part 2 - Find all clicks from the clicks table: Select all User IDs from the click table Select the event_time Mark the interaction type as 'click' for all of these rows Filter for the dates set in Step 1 using the partition files to reduce BigQuery costs by only searching in files within a 2 day interval of the set ...
q2 = """ UNION ALL ( SELECT user_id, click_id.time_usec AS interaction_time, advertiser_click_cost_usd AS cost, 'click' AS interaction_type FROM adh.google_ads_clicks WHERE user_id != '0' AND impression_data.{id_type} IN ({IDs}) ) ...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
output example: <table> <tr> <th>USER_ID</th> <th>interaction_time</th> <th>interaction_type</th> </tr> <tr> <td>001</td> <td>timestamp</td> <td>impression</td> </tr> <tr> <td>001</td> <td>timestamp</td> <td>impression</td> </tr> <tr> <td>001</td> <td>time...
q3 = """ user_level_data AS ( SELECT user_id, SUM(IF(interaction_type = 'imp', 1, 0)) AS impressions, SUM(IF(interaction_type = 'click', 1, 0)) AS clicks, SUM(cost) AS cost FROM imp_u_clicks GROUP BY ...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
output example: <table> <tr> <th>USER_ID</th> <th>impressions</th> <th>clicks</th> </tr> <tr> <td>001</td> <td>3</td> <td>1</td> </tr> <tr> <td>002</td> <td>1</td> <td>1</td> </tr> <tr> <td>003</td> <td>1</td> <td>0</td> </tr> </table> Part 4 ...
q4 = """ SELECT impressions AS frequency, SUM(clicks) AS clicks, SUM(impressions) AS impressions, COUNT(*) AS reach, SUM(cost) AS cost FROM user_level_data GROUP BY 1 ORDER BY frequency ASC """
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
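Parts 3 and 4 of the query (per-user counts, then rolling users up into a frequency histogram) can be mirrored in pandas on an assumed toy event table, which makes the grouping logic easy to check:

```python
import pandas as pd

events = pd.DataFrame({
    'user_id': ['001', '001', '001', '002', '003'],
    'interaction_type': ['imp', 'imp', 'click', 'imp', 'imp'],
})
# Part 3: per-user impression and click counts
per_user = (events
            .assign(impressions=events.interaction_type.eq('imp'),
                    clicks=events.interaction_type.eq('click'))
            .groupby('user_id')[['impressions', 'clicks']].sum())
# Part 4: group users by their impression count (the "frequency" bucket)
freq = per_user.groupby('impressions').agg(clicks=('clicks', 'sum'),
                                           reach=('clicks', 'size'))
```

Here two users saw exactly one impression (reach 2 at frequency 1), and the single click belongs to the user in the frequency-2 bucket.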
Create the query required for ADH * When working with ADH the standard BigQuery query needs to be adapted to run in ADH * This can be done via the API
import datetime try: full_circle_query = GetService() except IOError as ex: print('Unable to create ads data hub service - %s' % ex) print('Did you specify the client secrets file?') sys.exit(1) d = datetime.datetime.today() query_create_body = { 'name': query_name + '_' + d.strftime('%d-%m-%Y') + '_freq'...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Step 3b - Query for demographics & interest
qb1 = """ SELECT a.affinity_category, COUNT(imp.query_id.time_usec) AS impression, COUNT(clk.click_id.time_usec) AS clicks, COUNT(DISTINCT imp.user_id) AS reach, COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency, COUNT(clk.click_id.time_usec) /COUNT(imp.query_id.time_usec)...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
output example: <table> <tr> <th>affinity</th> <th>impression</th> <th>click</th> <th>reach</th> <th>frequency</th> <th>ctr</th> </tr> <tr> <td>Gamers</td> <td>10</td> <td>2</td> <td>5</td> <td>2</td> <td>0.2</td> </tr> <tr> <td>Music Lovers</td> <td>20...
qb2 = """ SELECT a.in_market_category, COUNT(imp.query_id.time_usec) AS impression, COUNT(clk.click_id.time_usec) AS clicks, COUNT(DISTINCT imp.user_id) AS reach, COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency, COUNT(clk.click_id.time_usec) /COUNT(imp.query_id.time_usec...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
output example: <table> <tr> <th>affinity</th> <th>impression</th> <th>click</th> <th>reach</th> <th>frequency</th> <th>ctr</th> </tr> <tr> <td>SUVs</td> <td>10</td> <td>2</td> <td>5</td> <td>2</td> <td>0.2</td> </tr> <tr> <td>Home & Garden</td> <td>20<...
qb3 = """ SELECT gen.gender_name, age.age_group_name, COUNT(imp.query_id.time_usec) AS impression, COUNT(clk.click_id.time_usec) AS clicks, COUNT(DISTINCT imp.user_id) AS reach, COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency, COUNT(clk.click_id.time_usec) /COUNT(imp...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
output example: <table> <tr> <th>affinity</th> <th>impression</th> <th>click</th> <th>reach</th> <th>frequency</th> <th>ctr</th> </tr> <tr> <td>SUVs</td> <td>10</td> <td>2</td> <td>5</td> <td>2</td> <td>0.2</td> </tr> <tr> <td>Home & Garden</td> <td>20<...
destination_table_full_path = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table destination_table_full_path_affinity = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table_affinity destination_table_full_path_inmarket = big_query_project + '.' + big_query_dataset...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Step 5 - Retrieve the table from BigQuery Retrieve the results from BigQuery. Check that the query has finished running and its output is saved in the new BigQuery table. When it is done we can retrieve it
import time statusDone = False while statusDone is False: print("waiting for the job to complete...") updatedOperation = full_circle_query.operations().get(name=operation['name']).execute() if updatedOperation.get('done'): statusDone = True time.sleep(5) print("Job...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
We are using the pandas library to run the queries. We pass in the query, the project id, and set the SQL dialect to 'standard' (as opposed to legacy SQL)
# Run each query and save the result as a table (also known as a dataframe) df = pd.io.gbq.read_gbq(q1, project_id=big_query_project, dialect='standard', reauth=True) dfs = [pd.io.gbq.read_gbq(q, project_id=big_query_project, dialect='standard', reauth=True) for q in qs] for result in dfs: print(result)
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Save the output as a CSV
# Save the original dataframe as a csv file in case you need to recover the original data dfs[0].to_csv('data_reach_freq.csv', index=False) dfs[1].to_csv('data_affinity.csv', index=False) dfs[2].to_csv('data_inmarket.csv', index=False) dfs[3].to_csv('data_age_gender.csv', index=False)
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Set up the dataframe and preview it to check the data.
#prepare reach & frequency data df = pd.read_csv('data_reach_freq.csv') print(df.head()) #prepare affinity data df3 = pd.read_csv('data_affinity.csv') print(df3.head()) #prepare in_market data df4 = pd.read_csv('data_inmarket.csv') print(df4.head()) #prepare age & gender data df5 = pd.read_csv('data_age_gender.csv') pr...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Step 6 - Set up the data and all the charts that will be plotted 6.1 Transform data Use the calculation function created earlier to calculate all the values based on your data
df = df[:max_freq+1] # Reduces the dataframe to have the size you set as the maximum frequency (max_freq) df = df_calc_fields(df) df2 = df.copy() # Copy the dataframe you calculated the fields in case you need to recover it graphs = [] # Variable to save all graphics df.head()
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Analysis 3: Determine impressions outside the optimal frequency Step 1: Define the optimal frequency parameter The parameter below guides the analysis of media loss in terms of impressions. We will calculate the percentage of impressions that fall beyond the number you set as the optimal frequency.
#@title 1.1 - Optimal Frequency optimal_freq = 3#@param {type:"integer", allow-input: true} slider_value = 1 #@param if optimal_freq > len(df2): raise Exception('Your optimal frequency is higher than the maximum frequency in your campaign; please make sure it is lower than {}'.format(len(df2)))
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Output: Calculate impression loss
from __future__ import division df2 = df_calc_fields(df2) df_opt, df_not_opt = df[:optimal_freq], df[optimal_freq:] total_impressions = list(df2.cumulative_impressions)[-1] total_imp_not_opt = list(df_not_opt.cumulative_impressions)[-1] - list(df_opt.cumulative_impressions)[-1] imp_not_opt_ratio = total_imp_not_opt ...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
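The impression-loss calculation above can be sketched on a small frequency table. The numbers below are hypothetical, in the shape produced by the frequency query (q4): one row per frequency bucket.

```python
import pandas as pd

# Hypothetical frequency table: one row per frequency bucket,
# as produced by the frequency rollup query. Numbers are invented.
freq_table = pd.DataFrame({
    'frequency': [1, 2, 3, 4, 5],
    'impressions': [1000, 800, 600, 200, 100],
})

optimal_freq = 3
total_impressions = freq_table['impressions'].sum()

# Impressions served to users past the optimal frequency are the "loss".
beyond_optimal = freq_table.loc[
    freq_table['frequency'] > optimal_freq, 'impressions'].sum()
loss_ratio = beyond_optimal / total_impressions
print(f'{loss_ratio:.1%} of impressions served beyond frequency {optimal_freq}')
```

With these toy numbers, 300 of 2700 impressions (about 11%) fall beyond the optimal frequency of 3.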
Analysis 4: Determine which affinities work best Step 1: Set up the charts
# calculate the color scale diff_affinity = df3.ctr.max(0) - df3.ctr.min(0) df3['colorscale'] = (df3.ctr - df3.ctr.min(0))/ diff_affinity *40 + 130 diff_in_market = df4.ctr.max(0) - df4.ctr.min(0) df4['colorscale'] = (df4.ctr - df4.ctr.min(0))/ diff_in_market *40 + 130 diff_in_market = df5.ctr.max(0) - df5.ctr.min(...
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Output: Visualise the data Affinity bubble chart Which affinity is underreaching? Which affinity should I shift the cost to?
enable_plotly_in_cell() iplot(graphs3[0])
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Check whether a big bubble sits in the bottom-left corner; those are the affinities with the lowest efficiency. Bubbles in the upper-left corner are the affinities with high potential. Keep your investment in the bubbles in the upper-right corner. Shift budget away from the bottom-right corner to the other areas. In market bubble c...
enable_plotly_in_cell() iplot(graphs3[1])
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Check whether a big bubble sits in the bottom-left corner; those are the in-market segments with the lowest efficiency. Bubbles in the upper-left corner are the in-market segments with high potential. Keep your investment in the bubbles in the upper-right corner. Shift budget away from the bottom-right corner to the other areas. age ...
enable_plotly_in_cell() iplot(graphs3[2])
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
google/data-pills
apache-2.0
Load the model HanLP's workflow starts by loading a model. Model identifiers are stored in the hanlp.pretrained package, grouped by NLP task.
import hanlp hanlp.pretrained.sts.ALL # the language is given by the last field of the identifier, or by the corresponding corpus
plugins/hanlp_demo/hanlp_demo/zh/sts_stl.ipynb
hankcs/HanLP
apache-2.0
Call hanlp.load to load it; the model will be downloaded to a local cache automatically:
sts = hanlp.load(hanlp.pretrained.sts.STS_ELECTRA_BASE_ZH)
plugins/hanlp_demo/hanlp_demo/zh/sts_stl.ipynb
hankcs/HanLP
apache-2.0
Semantic textual similarity Pass in a list of tuples, each holding two short texts, to compute their semantic textual similarity:
sts([ ('看图猜一电影名', '看图猜电影'), ('无线路由器怎么无线上网', '无线上网卡和无线路由器怎么用'), ('北京到上海的动车票', '上海到北京的动车票'), ])
plugins/hanlp_demo/hanlp_demo/zh/sts_stl.ipynb
hankcs/HanLP
apache-2.0
Sometimes you need the index of an element and the element itself while you iterate through a list. This can be achieved with the enumerate function.
for i, e in enumerate(greek): print(e + " is at index: " + str(i))
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
vbarua/PythonWorkshop
mit
The above takes advantage of tuple unpacking, because enumerate generates a sequence of tuples. Let's look at this more explicitly.
list_of_tuples = [(1,2,3),(4,5,6), (7,8,9)] for (a, b, c) in list_of_tuples: print(a + b + c)
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
vbarua/PythonWorkshop
mit
If you have two or more lists you want to iterate over together you can use the zip function.
for e1, e2 in zip(greek, greek): print("Double Greek: " + e1 + e2)
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
vbarua/PythonWorkshop
mit
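One detail worth knowing about zip: it stops at the shortest input, so mismatched lengths truncate silently. A small sketch (the element names are just sample data):

```python
symbols = ["H", "He", "Li"]
names = ["Hydrogen", "Helium", "Lithium", "Beryllium"]  # one extra entry

# zip stops at the shortest input, so "Beryllium" is silently dropped.
pairs = list(zip(symbols, names))
for symbol, name in pairs:
    print(symbol + " is " + name)
```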
You can also iterate over tuples
for i in (1,2,3): print(i)
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
vbarua/PythonWorkshop
mit
and dictionaries in various ways
elements = { "H": "Hydrogen", "He": "Helium", "Li": "Lithium", } for key in elements: # Over the keys print(key) for key, val in elements.items(): # Over the keys and values print(key + ": " + val) for val in elements.values(): print(val)
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
vbarua/PythonWorkshop
mit
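A related dictionary pattern: indexing with a key that is missing raises a KeyError, while the get method returns a default instead. A short sketch using the same elements dictionary:

```python
elements = {"H": "Hydrogen", "He": "Helium", "Li": "Lithium"}

# elements["Xx"] would raise KeyError; .get returns a default instead.
print(elements.get("He", "unknown"))
print(elements.get("Xx", "unknown"))
```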
Conditionals Conditionals are a way of having certain actions occur only on specific conditions.
x = 5 if x % 2 == 0: # Checks if x is even print(str(x) + " is even!!!") if x % 2 == 1: # Checks if x is odd print(str(x) + " is odd!!!")
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
vbarua/PythonWorkshop
mit
In the above example the second condition is somewhat redundant because if x is not even it is odd. There's a better way of expressing this.
if x % 2 == 0: # Checks if x is even print(str(x) + " is even!!!") else: # x is odd print(str(x) + " is odd!!!")
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
vbarua/PythonWorkshop
mit
You can have more than two conditions.
if x % 4 == 0: print("x divisible by 4") elif x % 3 == 0 and x % 5 == 0: print("x divisible by 3 and 5") elif x % 1 == 0: print("x divisible by 1") else: print("I give up")
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
vbarua/PythonWorkshop
mit
Note how only the first conditional branch that matches gets executed. The else condition is a catch-all that would have executed if none of the other branches had been chosen. Bonus: List Comprehensions You can use the for keyword to generate new lists with data using list comprehensions.
r1 = [] for i in range(10): r1.append(i**2) # Equivalent to the loop above. r2 = [i**2 for i in range(10)] print(r1) print(r2) r1 = [] for i in range(30): if (i%2 == 0 and i%3 == 0): r1.append(i) # Equivalent to the loop above. r2 = [i for i in range(30) if i%2==0 and i%3==0] print(r1) print(r2)
Code/Introduction To Python/4 - Conditionals and Looping.ipynb
vbarua/PythonWorkshop
mit
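The same comprehension syntax extends beyond lists: dictionaries and sets have comprehensions too. A short sketch:

```python
# Dict comprehension: map each number to its square.
squares = {i: i ** 2 for i in range(5)}

# Set comprehension: the unique even numbers below 10.
evens = {i for i in range(10) if i % 2 == 0}

print(squares)
print(evens)
```

The braces distinguish them from list comprehensions; a key: value pair makes a dict, a bare expression makes a set.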
Load the data As a first step we will load a large dataset using dask. If you have followed the setup instructions you will have downloaded a large CSV containing 12 million taxi trips. Let's load this data using dask to create a dataframe ddf:
ddf = dd.read_csv('../data/nyc_taxi.csv', parse_dates=['tpep_pickup_datetime']) ddf['hour'] = ddf.tpep_pickup_datetime.dt.hour # If your machine is low on RAM (<8GB) don't persist (though everything will be much slower) ddf = ddf.persist() print('%s Rows' % len(ddf)) print('Columns:', list(ddf.columns))
notebooks/07-working-with-large-datasets.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
Create a dataset In previous sections we have already seen how to declare a set of Points from a pandas DataFrame. Here we do the same for a Dask dataframe passed in with the desired key dimensions:
points = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y'])
notebooks/07-working-with-large-datasets.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause