At this point we can "compare" the output of the three different methods we used, again by using the zip function.
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]))

for baseline_out, cltk_out, nltk_out in zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]):
    print("Baseline: %s\nCLTK: %s\nNLTK: %s\n" % (baseline_out, cltk_out, nltk_out))
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
Exercise: Extract the named entities from the English translation of De Bello Gallico, book 1. The CTS URN for this translation is urn:cts:latinLit:phi0448.phi001.perseus-eng2:1.
# Modify the code above to use the English model of the Stanford tagger instead of the Italian one.
# Hint: stanford_model_english = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/english.muc.7class.distsim.crf.ser.gz"
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
Exercise 2.2: Implement Backpropagation for an MLP in Numpy and train it. Instantiate the feed-forward model class and optimization parameters. This model follows the architecture described in Algorithm 10.
# Model
geometry = [corpus.nr_features, 20, 2]
activation_functions = ['sigmoid', 'softmax']

# Optimization
learning_rate = 0.05
num_epochs = 10
batch_size = 30

from lxmls.deep_learning.numpy_models.mlp import NumpyMLP
model = NumpyMLP(
    geometry=geometry,
    activation_functions=activation_functions,
    learning_rate=learning_rate
)
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
Milestone 1: Open the code for this model, located in lxmls/deep_learning/numpy_models/mlp.py. Implement the method backpropagation() in the class NumpyMLP using the Backpropagation recursion we just saw. As a first step, focus on getting the gradients of each layer, one at a time. Use the code below to plot the loss values for the studied weight and its perturbed versions.
from lxmls.deep_learning.mlp import get_mlp_parameter_handlers, get_mlp_loss_range

# Get functions to get and set values of a particular weight of the model
get_parameter, set_parameter = get_mlp_parameter_handlers(
    layer_index=1,
    is_bias=False,
    row=0,
    column=0
)

# Get batch of data
batch = data.batches('train', batch_size=batch_size)[0]

# Get loss and weight value
current_loss = model.cross_entropy_loss(batch['input'], batch['output'])
current_weight = get_parameter(model.parameters)

# Get range of values of the weight and loss around current parameter values
weight_range, loss_range = get_mlp_loss_range(model, get_parameter, set_parameter, batch)
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
Once you have implemented at least the gradient of the last layer, you can start checking whether the values match.
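Before plotting, a quick numerical sanity check can catch sign and indexing errors. The sketch below is generic and independent of the toolkit; the function and toy loss here are illustrative, not part of lxmls:

```python
import numpy as np

def finite_difference_check(loss_fn, grad_value, weights, index, h=1e-6):
    """Compare one analytic gradient entry against a central finite difference.

    loss_fn: callable taking a weight vector and returning a scalar loss.
    grad_value: the analytic gradient entry to verify.
    """
    w_plus = weights.copy()
    w_plus[index] += h
    w_minus = weights.copy()
    w_minus[index] -= h
    numeric = (loss_fn(w_plus) - loss_fn(w_minus)) / (2 * h)
    return abs(numeric - grad_value)

# Toy example: loss(w) = w0**2 + 3*w1, so dloss/dw0 at w0=2 is 4
loss = lambda w: w[0]**2 + 3 * w[1]
w = np.array([2.0, 1.0])
error = finite_difference_check(loss, 4.0, w, 0)
```

A small `error` means the analytic gradient agrees with the numerical estimate at that entry.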
# Get the gradient value for that weight
gradients = model.backpropagation(batch['input'], batch['output'])
current_gradient = get_parameter(gradients)
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
Now you can plot the values of the loss around a given parameter value versus the gradient. If you have implemented this correctly, the gradient should be tangent to the loss at the current weight value; see Figure 3.5. Once you have completed the exercise, you should also be able to plot the gradients of the other layers. Take into account that the gradients for the first layer will only be non-zero for the indices of words present in the batch. You can locate these using:
# Use this to know the non-zero values of the input (that have non-zero gradient)
batch['input'][0].nonzero()
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
Copy the following code for plotting
%matplotlib inline
import matplotlib.pyplot as plt

# Plot empirical
plt.plot(weight_range, loss_range)
plt.plot(current_weight, current_loss, 'xr')
plt.ylabel('loss value')
plt.xlabel('weight value')

# Plot real
h = plt.plot(
    weight_range,
    current_gradient*(weight_range - current_weight) + current_loss,
    'r--'
)
plt.show()
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
Milestone 2: After you have ensured that your Backpropagation algorithm is correct, you can train a model with the data we have.
import numpy as np

# Get batch iterators for train and test
train_batches = data.batches('train', batch_size=batch_size)
test_set = data.batches('test', batch_size=None)[0]

# Epoch loop
for epoch in range(num_epochs):

    # Batch loop
    for batch in train_batches:
        model.update(input=batch['input'], output=batch['output'])

    # Prediction for this epoch
    hat_y = model.predict(input=test_set['input'])

    # Evaluation
    accuracy = 100*np.mean(hat_y == test_set['output'])

    # Inform user
    print("Epoch %d: accuracy %2.2f %%" % (epoch+1, accuracy))
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
Contents
I. Create and minimize the molecule
II. Select the torsional bond
III. Rigid rotation scan
IV. Relaxed rotation scan
V. Plot the potential energy surfaces
VI. Investigate conformational changes

I. Create and minimize the molecule
mol = mdt.from_smiles('CCCC')
mol.draw()

mol.set_energy_model(mdt.models.GAFF)
mol.energy_model.configure()

minimization = mol.minimize(nsteps=40)
minimization.draw()
moldesign/_notebooks/Example 5. Enthalpic barriers.ipynb
tkzeng/molecular-design-toolkit
apache-2.0
IV. Relaxed rotation scan

Next, we'll get the right barrier (up to the accuracy of the energy model). Here, we'll rotate around the bond, but then perform a constrained minimization at each rotation point. This will allow all other degrees of freedom to relax, thus finding lower energies at each point along the path. Note: in order to break any spurious symmetries, this loop also adds a little bit of random noise to each structure before performing the minimization.
constraint = twist.constrain()
relaxed = mdt.Trajectory(mol)

for angle in angles:
    print(angle, ':', end=' ')

    # add random noise to break symmetry
    mol.positions += np.random.random(mol.positions.shape) * 0.01*u.angstrom
    mol.positions -= mol.center_of_mass

    twist.value = angle
    constraint.value = angle
    t = mol.minimize(nsteps=100)
    relaxed.new_frame(annotation='angle: %s, energy: %s' %
                      (twist.value.to(u.degrees), mol.potential_energy))

relaxed.draw()
moldesign/_notebooks/Example 5. Enthalpic barriers.ipynb
tkzeng/molecular-design-toolkit
apache-2.0
Instead of printing the numpy object, we can view its representation directly:
array1
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
Let's create a second list, as well as a list of lists:
ma_liste2 = [10, 20, 30, 40, 50]
mes_listes = [ma_liste, ma_liste2]
print("mes listes: \n", mes_listes)
print("-"*40)

matrice = np.array(mes_listes)
print("La matrice: \n", matrice)
print("-"*40)

print("Représentation de la matrice:")
matrice  # to view its representation
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
We thus have a matrix with 2 rows and 5 columns, whose elements are integers.
print("shape = ", matrice.shape) # information sur nombre de lignes et colonnes print("type = ", matrice.dtype) # information sur le contenu
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
We can automatically create certain special matrices, whether filled with zeros, with ones, or the identity matrix (which is a square matrix by definition).
print(np.zeros(4))
print('-'*40)
print(np.zeros([3,3]))
print('-'*40)
print(np.ones([3,2]))
print('-'*40)
print(np.eye(4))  # identity matrix
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
Numpy includes a generalization of Python's range function that allows non-integer values:
np.arange(1, 5, 0.2)
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
Elementary operations on numpy matrices (arrays)

By default, operations are applied to each element of the matrix. We will demonstrate this by creating two matrices of the same size (same number of rows and columns) and performing various operations. We start with operations on a single matrix.
mat1 = np.array([[1, 2, 3],
                 [4, 5, 6]])
print(3 * mat1)
print("-"*40)
print(mat1 / 2)
print("-"*40)
print(1 / mat1)
print("-"*40)
print(mat1 % 3)
print("-"*40)
print(mat1 + 20)
print("-"*40)
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
We now consider operations involving two matrices.
mat2 = np.array([[11, 12, 13],
                 [14, 15, 16]])
print(mat1 + mat2)
print("-"*40)
print(mat2 - mat1)
print("-"*40)
print(mat1 * mat1)  # IMPORTANT: this is NOT standard matrix multiplication
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
If we try such operations on matrices of different sizes, it does not work.
mat3 = np.array([[1, 2],
                 [3, 4]])
mat3 * mat1
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
Estimate of $\lambda$ at each odd iteration. Below: the number of iterations required, the eigenvalue found at the end, its associated eigenvector, the eigenvalue of the previous step with its eigenvector, and the difference between the last and next-to-last eigenvectors found.
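The function npower_eps is defined in an earlier cell of the notebook. As a rough illustration of what a power iteration with an eps stopping criterion looks like (a standalone sketch, not the notebook's actual implementation):

```python
import numpy as np

def npower_eps_sketch(A, eps, max_iter=1000):
    """Power iteration with a convergence criterion on the eigenvector change.

    Returns (iterations, eigenvalue estimate, eigenvector). Hypothetical
    stand-in for the notebook's npower_eps.
    """
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for k in range(1, max_iter + 1):
        w = A @ v
        v_new = w / np.linalg.norm(w)
        lam = v_new @ A @ v_new  # Rayleigh quotient estimate of the eigenvalue
        if np.linalg.norm(v_new - v) < eps:
            return k, lam, v_new
        v = v_new
    return max_iter, lam, v
```

On a matrix with a clearly dominant eigenvalue this converges quickly to that eigenvalue and its eigenvector.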
npower_eps(A, 1e-5)
Física Computacional/Ficha 6.ipynb
risantos/schoolwork
mit
Computation using the method without a stopping criterion, as a function of k instead. Returns:
- Iteration
- Normalized vector
- Eigenvalue
npower(A, 6)
npower(A, 12)
Física Computacional/Ficha 6.ipynb
risantos/schoolwork
mit
Computation of eigenvalues and eigenvectors using numpy's built-in function:
linalg.eig(A)
Física Computacional/Ficha 6.ipynb
risantos/schoolwork
mit
Exercise 2 - Cyclic Jacobi Method
def jacobi(A_i, eps, nitmax):
    A = np.copy(A_i)  # cut dependencies on the input matrix
    m = len(A)
    iteration = 0
    Q = np.identity(m)

    def off(mat):
        # square root of the sum of squared off-diagonal entries
        off_sum = 0
        for i in range(m):
            for j in range(m):
                if j != i:
                    off_sum += mat[i, j]**2
        return np.sqrt(off_sum)

    def frobenius_norm(mat):
        norm = 0
        for i in range(m):
            for j in range(m):
                norm += mat[i, j]**2
        return np.sqrt(norm)

    while (off(A) > eps and iteration < nitmax):
        # find the largest off-diagonal element A[j, k]
        j, k, ajk = 0, 0, 0.
        for ji in range(m-1):
            for ki in range(ji+1, m):
                absjk = abs(A[ji, ki])
                if absjk >= ajk:
                    j, k, ajk = ji, ki, absjk

        def CSjk(mati, j, k):
            # cosine and sine of the Jacobi rotation angle
            mat = np.copy(mati)
            if mat[j, j] == mat[k, k]:
                C, S = np.cos(np.pi / 4), np.sin(np.pi / 4)
            else:
                tau = 2*mat[j, k] / (mat[k, k] - mat[j, j])
                chi = 1 / np.sqrt(1 + tau**2)
                C = np.sqrt((1 + chi) / 2)
                S = np.sign(tau) * np.sqrt((1 - chi) / 2)
            return C, S

        C, S = CSjk(A, j, k)

        # apply the rotation to A
        A_l = np.zeros_like(A)
        for r in range(m):
            if r != j and r != k:
                A_l[r, j] = C * A[r, j] - S * A[r, k]
                A_l[j, r] = C * A[r, j] - S * A[r, k]
                A_l[r, k] = S * A[r, j] + C * A[r, k]
                A_l[k, r] = S * A[r, j] + C * A[r, k]
            for s in range(m):
                if s != j and s != k:
                    A_l[r, s] = np.copy(A[r, s])
        A_l[j, j] = np.copy((C**2 * A[j, j]) + (S**2 * A[k, k]) - (2 * S * C * A[j, k]))
        A_l[j, k] = S * C * (A[j, j] - A[k, k]) + ((C**2 - S**2) * A[j, k])  # A_l[j, k] = 0
        A_l[k, j] = np.copy(A_l[j, k])
        A_l[k, k] = np.copy((S**2 * A[j, j]) + (C**2 * A[k, k]) + (2 * S * C * A[j, k]))
        A = A_l

        # accumulate the rotations in Q
        Q_l = np.zeros_like(Q)
        for r in range(m):
            for s in range(m):
                if s != j and s != k:
                    Q_l[r, s] = np.copy(Q[r, s])
            Q_l[r, j] = C * Q[r, j] - S * Q[r, k]
            Q_l[r, k] = S * Q[r, j] + C * Q[r, k]
        Q = Q_l

        iteration += 1

    D = Q.transpose().dot(A_i).dot(Q)
    return A, off(A), D, Q, iteration, off(A_i), frobenius_norm(A_i)

a, oa, d, q, it, oi, fi = jacobi(A, 1e-4, 100)
print('D:')
print(d)
print('Q:')
print(q)
print('Iterações:', it)

oi**2 / fi**2
oa**2 / fi**2
linalg.eig(A)
oi**2 / 8.625
oa**2 / 8.625
np.cos(np.pi/4)

# five-point finite-difference derivatives
df_5pts = lambda f, x, h: (-3*f(x+4*h) + 16*f(x+3*h) - 36*f(x+2*h) + 48*f(x+h) - 25*f(x)) / (12*h)
df2_5pts = lambda f, x, h: df_5pts(df_5pts, x, h)

# def ui(x): if  (left incomplete in the source)

def schrodingerpvp(xmin, xmax, subint, eps):
    h = (xmax - xmin) / subint
    xi = [xmin + i*h for i in range(subint)]  # grid points spaced by h
    # ui = [lambda x: u(x) for x in xi]
    D = np.zeros((subint-1, subint-1))
    for i in range(subint-2):
        D[i, i] = (2 / h**2) + xi[i]**2
        if i == 0:
            D[i, 1] = - 1 / h**2
        elif i == subint-2:
            D[i, i-1] = - 1 / h**2
        else:
            D[i, i+1] = -1 / h**2
            D[i, i-1] = -1 / h**2
    Dt, odt, lam, vec, it, od, frobd = jacobi(D, eps, 100000)
    return lam, vec, D

l, v, D = schrodingerpvp(-10, 10, 50, 1e-4)
linalg.eigvals(D)
Física Computacional/Ficha 6.ipynb
risantos/schoolwork
mit
The first step is to create an object of the SampleSize class with the parameter of interest, the sample size calculation method, and the stratification status. In this example, we want to calculate sample sizes for proportions, using the Wald method for a stratified design. This is achieved with the following snippet of code:

```python
SampleSize(parameter="proportion", method="wald", stratification=True)
```

Because we are using a stratified sample design, it is best to specify the expected coverage levels by stratum. If the information is not available, then aggregated values can be used across the strata. The 2017 Senegal DHS published the coverage rates by region, hence we have the information available by stratum. To provide this information to Samplics, we use a Python dictionary as follows:

```python
expected_coverage = {
    "Dakar": 0.849, "Ziguinchor": 0.809, "Diourbel": 0.682,
    "Saint-Louis": 0.806, "Tambacounda": 0.470, "Kaolack": 0.797,
    "Thies": 0.834, "Louga": 0.678, "Fatick": 0.766, "Kolda": 0.637,
    "Matam": 0.687, "Kaffrine": 0.766, "Kedougou": 0.336, "Sedhiou": 0.742,
}
```

Now we want to calculate the sample size with a desired precision of 0.07, which means that we want the expected vaccination coverage rates to have 7% half confidence intervals, e.g. an expected rate of 90% will have a confidence interval of [83%, 97%]. Note that the desired precision can be specified by stratum, in a similar way to the target coverage, using a Python dictionary. Given that information, we can calculate the sample size using the SampleSize class as follows.
# target coverage rates
expected_coverage = {
    "Dakar": 0.849, "Ziguinchor": 0.809, "Diourbel": 0.682,
    "Saint-Louis": 0.806, "Tambacounda": 0.470, "Kaolack": 0.797,
    "Thies": 0.834, "Louga": 0.678, "Fatick": 0.766, "Kolda": 0.637,
    "Matam": 0.687, "Kaffrine": 0.766, "Kedougou": 0.336, "Sedhiou": 0.742,
}

# Declare the sample size calculation parameters
sen_vaccine_wald = SampleSize(
    parameter="proportion", method="wald", stratification=True
)

# calculate the sample size
sen_vaccine_wald.calculate(target=expected_coverage, precision=0.07)

# show the calculated sample size
sen_vaccine_wald.samp_size
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
SampleSize calculates the sample sizes and stores them in the samp_size attribute, which is a Python dictionary object. If a dataframe is better suited for the use case, the method to_dataframe() can be used to create a pandas dataframe.
sen_vaccine_wald_size = sen_vaccine_wald.to_dataframe() sen_vaccine_wald_size
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
The sample size calculation above assumes that the design effect (DEFF) is equal to 1. A design effect of 1 corresponds to a sampling design with a variance equivalent to a simple random sample of the same size. In the context of complex sampling designs, DEFF is often different from 1. Stage sampling and unequal weights usually increase the design effect above 1. The 2017 Senegal DHS indicated a design effect equal to 1.963 (1.401²) for basic vaccination. Hence, to calculate the sample size, we will use the design effect provided by DHS.
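As a rough hand check, the standard Wald sample size for a proportion with a design effect is n = DEFF · z² · p(1−p) / d². The sketch below illustrates that formula only; samplics' exact rounding and adjustments may differ:

```python
import math

def wald_n(p, d, deff=1.0, z=1.96):
    """Wald sample size for a proportion: n = deff * z^2 * p(1-p) / d^2.

    p: expected proportion, d: desired precision (half-width of the CI),
    deff: design effect, z: normal quantile for the confidence level.
    """
    return math.ceil(deff * z**2 * p * (1 - p) / d**2)

# Dakar: expected coverage 0.849, precision 0.07, DEFF = 1.401**2
n_dakar = wald_n(0.849, 0.07, deff=1.401**2)
```

The design effect enters as a simple multiplier, so a DEFF of about 1.96 roughly doubles the sample size relative to simple random sampling.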
sen_vaccine_wald.calculate(target=expected_coverage, precision=0.07, deff=1.401 ** 2) sen_vaccine_wald.to_dataframe()
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
Since the sample design is stratified, the sample size calculation will be more precise if DEFF is specified at the stratum level, which is available from the 2017 Senegal DHS report. Some regions have a design effect below 1. To be conservative with our sample size calculation, we will use 1.21 as the minimum design effect in the sample size calculation.
# Design effects by stratum
expected_deff = {
    "Dakar": 1.100 ** 2, "Ziguinchor": 1.100 ** 2, "Diourbel": 1.346 ** 2,
    "Saint-Louis": 1.484 ** 2, "Tambacounda": 1.366 ** 2, "Kaolack": 1.360 ** 2,
    "Thies": 1.109 ** 2, "Louga": 1.902 ** 2, "Fatick": 1.100 ** 2,
    "Kolda": 1.217 ** 2, "Matam": 1.403 ** 2, "Kaffrine": 1.256 ** 2,
    "Kedougou": 2.280 ** 2, "Sedhiou": 1.335 ** 2,
}

# Calculate sample sizes using deff at the stratum level
sen_vaccine_wald.calculate(target=expected_coverage, precision=0.07, deff=expected_deff)

# Convert sample sizes to a dataframe
sen_vaccine_wald.to_dataframe()
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
The sample size calculation above does not account for attrition of sample sizes due to non-response. In the 2017 Senegal DHS, the overall household and women response rate was about 94.2%.
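The non-response adjustment simply inflates the computed sample size by the inverse of the expected response rate. A sketch of that arithmetic (samplics may round differently):

```python
import math

def adjust_for_nonresponse(n, resp_rate):
    """Inflate a required sample size to compensate for expected non-response.

    n: number of completed interviews needed; resp_rate: expected response rate.
    """
    return math.ceil(n / resp_rate)

# e.g. if 198 completed interviews are needed and 94.2% of contacts respond
n_target = adjust_for_nonresponse(198, 0.942)
```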
# Calculate sample sizes with a resp_rate of 94.2%
sen_vaccine_wald.calculate(
    target=expected_coverage, precision=0.07, deff=expected_deff, resp_rate=0.942
)

# Convert sample sizes to a dataframe
sen_vaccine_wald.to_dataframe(
    col_names=["region", "vaccine_coverage", "precision", "number_12_23_months"]
)
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
Fleiss method. The World Health Organization (WHO) recommends using the Fleiss method for calculating sample sizes for vaccination coverage surveys (see https://www.who.int/immunization/documents/who_ivb_18.09/en/). To use the Fleiss method, repeat the examples shown above with method="fleiss".
sen_vaccine_fleiss = SampleSize(
    parameter="proportion", method="fleiss", stratification=True
)

sen_vaccine_fleiss.calculate(
    target=expected_coverage, precision=0.07, deff=expected_deff, resp_rate=0.942
)

sen_vaccine_sample = sen_vaccine_fleiss.to_dataframe(
    col_names=["region", "vaccine_coverage", "precision", "number_12_23_months"]
)

sen_vaccine_sample
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
At this point, we have the number of children aged 12-23 months needed to achieve the desired precision given the expected proportions, using the Wald or Fleiss calculation methods.

Number of households. To obtain the number of households, we need to know the expected average number of children aged 12-23 months per household. This information can be obtained from census data or from survey rosters. Since the design is stratified, it is best to obtain the information per stratum. In this example, we will assume that 5.2% of the population is between 12 and 23 months of age and apply that rate to all strata and households. Hence, the minimum number of households to select is:
sen_vaccine_sample["number_households"] = round( sen_vaccine_sample["number_12_23_months"] / 0.052, 0 ) sen_vaccine_sample
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
Finding the knee in Figure 2 from the paper
def figure2():
    x = np.linspace(0.0, 1, 10)
    with np.errstate(divide='ignore'):
        return x, np.true_divide(-1, x + 0.1) + 5
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 0: Raw input
x, y = figure2()

if not np.array_equal(np.array(x), np.sort(x)):
    raise ValueError('x needs to be sorted')
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 1: Fit a spline
from scipy.interpolate import interp1d

N = len(x)

# Ds = the finite set of x- and y-values that define a smooth curve,
# one that has been fit to a smoothing spline.
uspline = interp1d(x, y)
Ds_y = uspline(x)
plt.plot(x, Ds_y);
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 2: Normalize the spline
def normalize(a):
    """return the normalized input array"""
    return (a - min(a)) / (max(a) - min(a))

# x and y normalized to unit square
x_normalized = normalize(x)
y_normalized = normalize(Ds_y)
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 3: Calculate the difference curve
# the difference curve
y_difference = y_normalized - x_normalized
x_difference = x_normalized.copy()

plt.title("Normalized spline & difference curve");
plt.plot(x_normalized, y_normalized);
plt.plot(x_difference, y_difference);
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 4: Identify local maxima of the difference curve
from scipy.signal import argrelextrema

# local maxima for knees
maxima_indices = argrelextrema(y_difference, np.greater)[0]
x_difference_maxima = x_difference[maxima_indices]
y_difference_maxima = y_difference[maxima_indices]

# local minima
minima_indices = argrelextrema(y_difference, np.less)[0]
x_difference_minima = x_difference[minima_indices]
y_difference_minima = y_difference[minima_indices]

plt.title("local maxima in difference curve");
plt.plot(x_normalized, y_normalized);
plt.plot(x_difference, y_difference);
plt.hlines(y_difference_maxima, plt.xlim()[0], plt.xlim()[1]);
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 5: Calculate thresholds
# Sensitivity parameter S
# smaller values detect knees quicker
S = 1.0
Tmx = y_difference_maxima - (S * np.abs(np.diff(x_normalized).mean()))
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 6: knee finding algorithm. If any difference value (x_dj, y_dj), where j > i, drops below the threshold y = T_lmx,i for (x_lmx,i, y_lmx,i) before the next local maximum in the difference curve is reached, Kneedle declares a knee at the x-value of the corresponding local maximum, x = x_lmx,i. If the difference values reach a local minimum and start to increase before y = T_lmx,i is reached, we reset the threshold value to 0 and wait for another local maximum to be reached.
# artificially place a local max at the last item in the x_difference array
maxima_indices = np.append(maxima_indices, len(x_difference) - 1)
minima_indices = np.append(minima_indices, len(x_difference) - 1)

# placeholder for which threshold region i is located in.
maxima_threshold_index = 0
minima_threshold_index = 0

curve = 'concave'
direction = 'increasing'
all_knees = set()
all_norm_knees = set()

# traverse the difference curve
for idx, i in enumerate(x_difference):

    # reached the end of the curve
    if i == 1.0:
        break

    # values in difference curve are at or after a local maximum
    if idx >= maxima_indices[maxima_threshold_index]:
        threshold = Tmx[maxima_threshold_index]
        threshold_index = idx
        maxima_threshold_index += 1

    # values in difference curve are at or after a local minimum
    if idx >= minima_indices[minima_threshold_index]:
        threshold = 0.0
        minima_threshold_index += 1

    # Do not evaluate values in the difference curve before the first local maximum.
    if idx < maxima_indices[0]:
        continue

    # evaluate the threshold
    if y_difference[idx] < threshold:
        if curve == 'convex':
            if direction == 'decreasing':
                knee = x[threshold_index]
                all_knees.add(knee)
                norm_knee = x_normalized[threshold_index]
                all_norm_knees.add(norm_knee)
            else:
                knee = x[-(threshold_index + 1)]
                all_knees.add(knee)
                norm_knee = x_normalized[-(threshold_index + 1)]
                all_norm_knees.add(norm_knee)
        elif curve == 'concave':
            if direction == 'decreasing':
                knee = x[-(threshold_index + 1)]
                all_knees.add(knee)
                norm_knee = x_normalized[-(threshold_index + 1)]
                all_norm_knees.add(norm_knee)
            else:
                knee = x[threshold_index]
                all_knees.add(knee)
                norm_knee = x_normalized[threshold_index]
                all_norm_knees.add(norm_knee)

plt.xticks(np.arange(0, 1.1, 0.1))
plt.plot(x_normalized, y_normalized);
plt.plot(x_difference, y_difference);
plt.hlines(Tmx[0], plt.xlim()[0], plt.xlim()[1], colors='g', linestyles='dashed');
plt.vlines(x_difference_maxima, plt.ylim()[0], plt.ylim()[1], colors='r', linestyles='dashed');
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
The vertical red dashed line represents the x-value of the knee point. The horizontal green dashed line represents the threshold value.
knee

# normalized x value where the knee was determined
norm_knee
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
There is a document in the DCC which gives the ZPK filter for the noisemon circuit: LIGO-T1100378. There are four zeros at 0, four poles at 5 Hz, and two poles at 4000 Hz. I just eyeballed the gain by matching the ASDs of the filtered drive signal and the noisemon. Passing the drive signal (e.g. ETMY_L1_MASTER_LR) through this filter, you get the predicted noisemon signal. The noisemon at the L1 stage is mostly limited by noise, but you can see things like the calibration lines stick up. Subtracting the filtered drive signal from the noisemon gives the voltage going to the coil which was not 'requested' by the drive/feedback.
def project_out_drive(glitch_time, dur=64, level='L1', quadrant='LR', gain=3.5e10):
    pad = 8
    gt = int(round(glitch_time))
    st = gt - dur/2
    et = gt + dur
    drive = TimeSeries.fetch('L1:SUS-ETMY_%s_MASTER_OUT_%s_DQ' % (level, quadrant),
                             st-pad, et)
    nmon = TimeSeries.fetch('L1:SUS-ETMY_%s_NOISEMON_%s_OUT_DQ' % (level, quadrant),
                            st, et)
    proj = drive.zpk([0,0,0,0], [5,5,5,5,4e3,4e3], gain).crop(st, et)
    return nmon, proj, drive.crop(st, et)

glitch = 1183077855.74
nmon, proj, drive = project_out_drive(1183077600, dur=300, quadrant='UR')

p1 = nmon.asd(16, 8).plot(label='Noisemon')
p1.gca().plot(proj.asd(16, 8), label='Projection')
p1.gca().plot((nmon-proj).asd(16, 8), label='Subtraction')
p1.gca().legend(loc='lower left')
p1.gca().set_ylim(1e-2, 10)
p1.gca().set_xlim(10, 30)
Notebooks/Subtraction/etmy_l1.ipynb
andrew-lundgren/detchar
gpl-3.0
The subtraction works pretty well. I adjusted the gain until the calibration lines were as low as I could get them (because they're the strongest feature and are well above the noise). The subtraction only seems to be good to about 1 part in 10. Probably the noisemon has some extra phase, or the zeros/poles are not exact. This should be good enough for our purposes. Now we go through a few loud glitches and do this subtraction. In each case, the loud glitch in DARM, which sent a large feedback to the L1 coil, is easily visible in the noisemon. But after subtraction, there is almost nothing left in the noisemon. This means that the voltage sent to the coil (which the noisemon witnesses) is exactly what was requested of the DAC. There's no glitch coming from the coil electronics. Note: I've also checked this for the other three quadrants. The same gain works for all of them.
def plot_glitch(glitch, quadrant='LR'):
    nmon, proj, drive = project_out_drive(glitch, quadrant=quadrant)
    p1 = nmon.highpass(8).crop(glitch-1, glitch+1).plot(label='Original')
    p1.gca().plot((nmon-proj).highpass(8).crop(glitch-1, glitch+1), label='Drive removed')
    p1.gca().legend()

plot_glitch(1183061141.71)
plot_glitch(1183077855.74)
plot_glitch(1183749148.39)
plot_glitch(1184624917.96)
plot_glitch(1184972637.76)
Notebooks/Subtraction/etmy_l1.ipynb
andrew-lundgren/detchar
gpl-3.0
Part B The lecture on statistics mentions latent variables, specifically how you cannot know what the underlying process is that's generating your data; all you have is the data, on which you have to impose certain assumptions in order to derive hypotheses about what generated the data in the first place. To illustrate this, the code provided below generates sample data from distributions with mean and variance that are typically not known to you. Put another way, pretend you cannot see the mean (loc) and variance (scale) in the code that generates these samples; all you usually can see are the data samples themselves. You'll use the numpy.mean and variance function you wrote in Part A to compute the statistics on the sample data itself and observe how these statistics change. In the space provided, compute and print the mean and variance of each of the three samples: - sample1 - sample2 - sample3 You can just print() them out in the space provided. Don't modify anything above where it says "DON'T MODIFY".
import numpy as np

np.random.seed(5735636)
sample1 = np.random.normal(loc = 10, scale = 5, size = 10)
sample2 = np.random.normal(loc = 10, scale = 5, size = 1000)
sample3 = np.random.normal(loc = 10, scale = 5, size = 1000000)

#########################
# DON'T MODIFY ANYTHING #
#   ABOVE THIS BLOCK    #
#########################

### BEGIN SOLUTION

### END SOLUTION
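For reference, one possible way to fill in the solution block (shown here with plain numpy functions; the assignment may expect the mean and variance implementations from Part A instead):

```python
import numpy as np

np.random.seed(5735636)
sample1 = np.random.normal(loc=10, scale=5, size=10)
sample2 = np.random.normal(loc=10, scale=5, size=1000)
sample3 = np.random.normal(loc=10, scale=5, size=1000000)

# Compute and print mean and variance of each sample; as the sample grows,
# the statistics approach the latent parameters (mean 10, variance 25).
for name, sample in [("sample1", sample1), ("sample2", sample2), ("sample3", sample3)]:
    print(name, "mean:", np.mean(sample), "variance:", np.var(sample))
```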
assignments/A6/A6_Q3.ipynb
eds-uga/csci1360e-su17
mit
If CPLEX is not installed, install CPLEX Community edition.
try:
    import cplex
except ImportError:
    raise Exception('Please install CPLEX. See https://pypi.org/project/cplex/')
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 2: Model the data. In this scenario, the data is simple. There are eight teams in each division, and the teams must play each team in their division once and each team outside the division once. We use the Python module collections, which implements some data structures that will help solve some problems. Named tuples give meaning to each position in a tuple, which makes the code more readable and self-documenting. You can use named tuples in any place where you use tuples. In this example, you create a namedtuple to hold the information for each match. We also define some of the parameters.
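As a quick standalone illustration of how namedtuples behave (the model's own match namedtuple is created in Step 3 below):

```python
from collections import namedtuple

# Each field name documents what that tuple position means
Match = namedtuple("Match", ["team1", "team2", "is_divisional"])
m = Match(team1=0, team2=3, is_divisional=1)

# Named access is self-documenting; positional access still works
assert m.team1 == 0 and m[2] == 1
```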
# Teams in 1st division
team_div1 = ["Baltimore Ravens", "Cincinnati Bengals", "Cleveland Browns",
             "Pittsburgh Steelers", "Houston Texans", "Indianapolis Colts",
             "Jacksonville Jaguars", "Tennessee Titans", "Buffalo Bills",
             "Miami Dolphins", "New England Patriots", "New York Jets",
             "Denver Broncos", "Kansas City Chiefs", "Oakland Raiders",
             "San Diego Chargers"]

# Teams in 2nd division
team_div2 = ["Chicago Bears", "Detroit Lions", "Green Bay Packers",
             "Minnesota Vikings", "Atlanta Falcons", "Carolina Panthers",
             "New Orleans Saints", "Tampa Bay Buccaneers", "Dallas Cowboys",
             "New York Giants", "Philadelphia Eagles", "Washington Redskins",
             "Arizona Cardinals", "San Francisco 49ers", "Seattle Seahawks",
             "St. Louis Rams"]

#number_of_matches_to_play = 1  # Number of matches to play between two teams in the league

# Schedule parameters
nb_teams_in_division = 5
max_teams_in_division = 10
number_of_matches_inside_division = 1
number_of_matches_outside_division = 1
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Now you will import the pandas library. Pandas is an open source Python library for data analysis. It uses two data structures, Series and DataFrame, which are built on top of NumPy. A Series is a one-dimensional object similar to an array, list, or column in a table. It will assign a labeled index to each item in the series. By default, each item receives an index label from 0 to N, where N is the length of the series minus one. A DataFrame is a tabular data structure comprised of rows and columns, similar to a spreadsheet, database table, or R's data.frame object. Think of a DataFrame as a group of Series objects that share an index (the column names). In the example, each division (the AFC and the NFC) is part of a DataFrame.
import pandas as pd

team1 = pd.DataFrame(team_div1)
team2 = pd.DataFrame(team_div2)

team1.columns = ["AFC"]
team2.columns = ["NFC"]

teams = pd.concat([team1, team2], axis=1)
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 3: Prepare the data Given the number of teams in each division and the number of intradivisional and interdivisional games to be played, you can calculate the total number of teams and the number of weeks in the schedule, assuming every team plays exactly one game per week. The season is split into halves, and the number of the intradivisional games that each team must play in the first half of the season is calculated.
import numpy as np

nb_teams = 2 * nb_teams_in_division
teams = range(nb_teams)

# Calculate the number of weeks necessary
nb_inside_div = (nb_teams_in_division - 1) * number_of_matches_inside_division
nb_outside_div = nb_teams_in_division * number_of_matches_outside_division
nb_weeks = nb_inside_div + nb_outside_div

# Weeks to schedule
weeks = range(nb_weeks)

# Season is split into two halves
first_half_weeks = range(int(np.floor(nb_weeks / 2)))
nb_first_half_games = int(np.floor(nb_weeks / 3))

from collections import namedtuple

match = namedtuple("match", ["team1", "team2", "is_divisional"])

matches = {match(t1, t2,
                 1 if (t2 <= nb_teams_in_division or t1 > nb_teams_in_division) else 0)
           for t1 in teams for t2 in teams if t1 < t2}
Number of games to play between pairs depends on whether the pairing is intradivisional or not.
nb_play = { m : number_of_matches_inside_division if m.is_divisional==1 else number_of_matches_outside_division for m in matches}
Step 4: Set up the prescriptive model Create the DOcplex model The model contains all the business constraints and defines the objective.
from docplex.mp.model import Model mdl = Model("sports")
Define the decision variables
plays = mdl.binary_var_matrix(matches, weeks, lambda ij: "x_%s_%d" %(str(ij[0]), ij[1]))
Express the business constraints Each pair of teams must play the correct number of games.
mdl.add_constraints( mdl.sum(plays[m,w] for w in weeks) == nb_play[m] for m in matches) mdl.print_information()
Each team must play exactly once in a week.
mdl.add_constraints( mdl.sum(plays[m,w] for m in matches if (m.team1 == t or m.team2 == t) ) == 1 for w in weeks for t in teams) mdl.print_information()
Games between the same teams cannot be on successive weeks.
mdl.add_constraints( plays[m,w] + plays[m,w+1] <= 1 for w in weeks for m in matches if w < nb_weeks-1) mdl.print_information()
Some intradivisional games should be in the first half.
mdl.add_constraints( mdl.sum(plays[m,w] for w in first_half_weeks for m in matches if (((m.team1 == t or m.team2 == t) and m.is_divisional == 1 ))) >= nb_first_half_games for t in teams) mdl.print_information()
Express the objective The objective function for this example is designed to force intradivisional games to occur as late in the season as possible. The incentive for intradivisional games increases by week. There is no incentive for interdivisional games.
gain = { w : w*w for w in weeks} # If an intradivisional pair plays in week w, Gain[w] is added to the objective. mdl.maximize( mdl.sum (m.is_divisional * gain[w] * plays[m,w] for m in matches for w in weeks) )
Solve with Decision Optimization You will get the best solution found after n seconds, due to a time limit parameter.
mdl.print_information() assert mdl.solve(), "!!! Solve of the model fails" mdl.report()
Step 5: Investigate the solution and then run an example analysis Determine which of the scheduled games will be a replay of one of the last 10 Super Bowls.<br> We start by creating a pandas DataFrame that contains the year and teams who played the last 10 Super Bowls.
try: # Python 2 team_league = dict({t : team_div1[t] for t in range(nb_teams_in_division) }.items() + \ {t+nb_teams_in_division : team_div2[t] for t in range(nb_teams_in_division) }.items() ) except: # Python 3 team_league = dict(list({t : team_div1[t] for t in range(nb_teams_in_division) }.items()) + \ list({t+nb_teams_in_division : team_div2[t] for t in range(nb_teams_in_division) }.items())) sol = namedtuple("solution",["week","is_divisional", "team1", "team2"]) solution = [sol(w, m.is_divisional, team_league[m.team1], team_league[m.team2]) for m in matches for w in weeks if plays[m,w].solution_value == 1] nfl_finals = [("2016", "Carolina Panthers", "Denver Broncos"), ("2015", "New England Patriots", "Seattle Seahawks"), ("2014", "Seattle Seahawks", "Denver Broncos"), ("2013", "Baltimore Ravens", "San Francisco 49ers"), ("2012", "New York Giants", "New England Patriots "), ("2011", "Green Bay Packers", "Pittsburgh Steelers"), ("2010", "New Orleans Saints", "Indianapolis Colts"), ("2009", "Pittsburgh Steelers", "Arizona Cardinals"), ("2008", "New York Giants", "New England Patriots"), ("2007", "Indianapolis Colts", "Chicago Bears") ] nfl_meetings = {(t[1], t[2]) for t in nfl_finals} winners_bd = pd.DataFrame(nfl_finals) winners_bd.columns = ["year", "team1", "team2"] display(winners_bd)
We now look for the games in our solution that are replays of one of the past 10 Super Bowls.
months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"] report = [] for m in solution: if (m.team1, m.team2) in nfl_meetings: report.append((m.week, months[m.week//4], m.team1, m.team2)) if (m.team2, m.team1) in nfl_meetings: report.append((m.week, months[m.week//4], m.team2, m.team1)) print(report) matches_bd = pd.DataFrame(report) matches_bd.columns = ["week", "Month", "Team1", "Team2"] try: #pandas >= 0.17 display(matches_bd.sort_values(by='week')) except: display(matches_bd.sort('week'))
Problem definition The rampy.centroid function calculates the centroid of the signal you provide. In this case, we have a combination of two Gaussian peaks with some noise. This example is the one used in the Machine Learning Regression notebook. The example signals $D_{i,j}$ are generated from a linear combination of two Gaussian peaks $S_{k,j}$, and are affected by a background and random noise, lumped into $\epsilon_{i,j}$: $$ D_{i,j} = C_{i,k} \times S_{k,j} + \epsilon_{i,j}$$ We will thus remove the background, then calculate the centroid, and plot it against $C_{i,k}$, which is known in the present case.
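The mixing model $D = C \times S$ is just a matrix product; here is a tiny hand-checked, framework-free sketch (toy numbers, not the notebook's data):

```python
# Toy example: 2 samples, 2 components, 3 spectral channels
S = [[1.0, 2.0, 3.0],   # partial spectrum of component 1
     [0.0, 1.0, 0.0]]   # partial spectrum of component 2
C = [[0.25, 0.75],      # concentrations for sample 1
     [0.60, 0.40]]      # concentrations for sample 2

# D[i][j] = sum_k C[i][k] * S[k][j]
D = [[sum(C[i][k] * S[k][j] for k in range(2)) for j in range(3)]
     for i in range(2)]
print(D[0])  # [0.25, 1.25, 0.75]
```
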
x = np.arange(0,600,1.0) nb_samples = 100 # number of samples in our dataset # partial spectra S_1 = scipy.stats.norm.pdf(x,loc=300.,scale=40.) S_2 = scipy.stats.norm.pdf(x,loc=400,scale=20) S_true = np.vstack((S_1,S_2)) print("Number of samples:"+str(nb_samples)) print("Shape of partial spectra matrix:"+str(S_true.shape)) # concentrations C_ = np.random.rand(nb_samples) #60 samples with random concentrations between 0 and 1 C_true = np.vstack((C_,(1-C_))).T print("Shape of concentration matrix:"+str(C_true.shape)) # background E_ = 1e-8*x**2 true_sig = np.dot(C_true,S_true) Obs = np.dot(C_true,S_true) + E_ + np.random.randn(nb_samples,len(x))*1e-4 # norm is a class which, when called, can normalize data into the # [0.0, 1.0] interval. norm = matplotlib.colors.Normalize( vmin=np.min(C_), vmax=np.max(C_)) # choose a colormap c_m = matplotlib.cm.jet # create a ScalarMappable and initialize a data structure s_m = matplotlib.cm.ScalarMappable(cmap=c_m, norm=norm) s_m.set_array([]) # plotting spectra # calling the ScalarMappable that was initialised with c_m and norm for i in range(C_.shape[0]): plt.plot(x, Obs[i,:].T, color=s_m.to_rgba(C_[i])) # we plot the colorbar, using again our # ScalarMappable c_bar = plt.colorbar(s_m) c_bar.set_label(r"C_") plt.xlabel('X') plt.ylabel('Y') plt.show()
examples/Baseline_and_Centroid_determination.ipynb
charlesll/RamPy
gpl-2.0
Baseline fit We will use the rampy.baseline function with a polynomial form. We first create an array to store the baseline-corrected spectra
Obs_corr = np.ones(Obs.shape) print(Obs_corr.shape)
We define regions of interest (ROI) where the baseline will be fit to the signals. From the previous figure, it is clear that they should be between 0 and 100, and between 500 and 600.
ROI = np.array([[0.,100.],[500.,600.]]) print(ROI)
Then we loop to save the baseline corrected data in this array.
for i in range(nb_samples): sig_corr, bas_, = rp.baseline(x,Obs[i,:].T,ROI,method="poly",polynomial_order=2) Obs_corr[i,:] = sig_corr.reshape(1,-1) # plotting spectra # calling the ScalarMappable that was initialised with c_m and norm plt.figure(figsize=(8,4)) plt.subplot(1,2,1) for i in range(C_.shape[0]): plt.plot(x, Obs[i,:].T, color=s_m.to_rgba(C_[i]), alpha=0.3) plt.plot(x,bas_,"k-",linewidth=4.0,label="baseline") # we plot the colorbar, using again our # ScalarMappable c_bar = plt.colorbar(s_m) c_bar.set_label(r"C_") plt.xlabel('X') plt.ylabel('Y') plt.legend() plt.title("A) Baseline fit") plt.subplot(1,2,2) for i in range(C_.shape[0]): plt.plot(x, Obs_corr[i,:].T, color=s_m.to_rgba(C_[i]), alpha=0.3) c_bar = plt.colorbar(s_m) c_bar.set_label(r"C_") plt.xlabel('X') plt.ylabel('Y') plt.title("B) Corrected spectra") plt.tight_layout() plt.show()
Centroid determination Now we can calculate the centroid of the signal. rampy.centroid calculates it as centroid = np.sum(y_/np.sum(y_)*x) It accepts arrays of spectra, organised as n points by m samples. Smoothing can be done if wanted, by indicating smoothing = True. We will compare both in the following code. A tweak is to prepare an array of x with the same shape as y, with the correct x values in each column. Furthermore, do not forget that arrays should be provided as n points by m samples, so use .T if needed to transpose your array. We need it below!
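Before applying it to the spectra, the centroid formula can be sanity-checked on a toy symmetric peak (a plain-Python sketch of the same weighted mean):

```python
# Toy symmetric peak centred on x = 2
x = [0, 1, 2, 3, 4]
y = [0.0, 1.0, 4.0, 1.0, 0.0]

# centroid = sum_i (y_i / sum(y)) * x_i
total = sum(y)
centroid = sum(yi / total * xi for xi, yi in zip(x, y))
print(centroid)  # ~2.0, as expected for a symmetric signal
```
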
x_array = np.ones((len(x),nb_samples)) for i in range(nb_samples): x_array[:,i] = x centroids_no_smooth = rp.centroid(x_array,Obs_corr.T) centroids_smooth = rp.centroid(x_array,Obs_corr.T,smoothing=True) centroids_true_sig = rp.centroid(x_array,true_sig.T,smoothing=True)
Now we can plot the centroids against the chemical ratio C_ for instance.
plt.figure() plt.plot(C_,centroids_true_sig,"r-",markersize=3.,label="true values") plt.plot(C_,centroids_no_smooth,"k.",markersize=5., label="non-smoothed centroids") plt.plot(C_,centroids_smooth,"b+",markersize=3., label="smoothed centroids") plt.xlabel("Fraction C_") plt.ylabel("Signal centroid") plt.legend()
Since we're dealing with text, we need to turn the characters into numbers in order to perform our calculations on them. We do this in two steps: first we get the sparse (one-hot encoded) representation of each character and then we learn a dense representation (so-called embeddings) as part of our model training. Sparse representation: one-hot encoding Our sparse representation will consist of sparse vectors of dimension n_chars, which in our case is 129 (128 ASCII chars + 1 end-of-sequence char). The feature vector for a single character will thus be of the form: $\qquad x(\text{char})\ =\ (0, 0, 1, 0, \dots, 0)$ Or equivalently in components, $\qquad x_i(\text{char})\ =\ \left\{\begin{matrix}1&\text{if } i = h(\text{char})\\0&\text{otherwise}\end{matrix}\right.$ where $h$ is a function that maps a character to an integer (e.g. a hash function). In our case, we use the built-in function ord: python In [1]: ord('H') Out[1]: 72 As it turns out, we don't actually need to construct the vector $x(\text{char})$ as displayed above. If you think about it, the only information that we need about $x$ is which component is switched on. In other words, the only information we need is $h(\text{char})$, in our case ord(char). So, the most efficient representation for our sparse feature vectors (single integers) turns out to be incredibly simple. For instance, the sparse representation of the phrase "Hello, world!" is simply: python In [1]: x = [ord(char) for char in "Hello, world!"] In [2]: x Out[2]: [72, 101, 108, 108, 111, 44, 32, 119, 111, 114, 108, 100, 33] Actually, we also need to append an end-of-sequence (EOS) character to tell our model to stop generating more text.
Let's set the index 0 aside for the EOS character, then we one-hot encode our phrase as follows: python In [1]: x = [ord(char) + 1 for char in "Hello, world!"] + [0] In [2]: x Out[2]: [73, 102, 109, 109, 112, 45, 33, 120, 112, 115, 109, 101, 34, 0] To go from a list of indices to a one-hot encoded vector in Tensorflow is super easy using tf.one_hot: ```python n_chars = 129 x_indices = tf.constant([73, 102, 109, 109, 112]) x_one_hot = tf.one_hot(x_indices, n_chars) # shape = (5, 129) ``` Dense representation: embeddings If we only have a few input characters, we can use the one-hot encoded representation directly as our input. In reality, though, we know that text consists of a large number of characters (in our case 129). In this case it's either infeasible or at best highly inefficient to use the sparse representation for our characters. Moreover, the sparse representation has no notion of proximity between characters such as 'a' and 'A' or more subtly 'i' and 'y'. A trick that we often use is to translate the high-dimensional sparse feature vectors to low-dimensional dense vectors. These dense vectors are called embeddings. Because the embeddings are low-dimensional, our model needs to learn far fewer weights. Of course, the model does need to learn the embeddings themselves, but this is a trade-off that does pay off. One of the interesting properties of embeddings is that the embeddings for 'a' and 'A' are very similar, which means that the rest of our network can focus on learning more abstract relations between characters. Another point of view is that learning embeddings is kind of like having an automated pre-processing step included in the model. Pre-processing in such an end-to-end setting ensures optimal performance in the task that we're actually interested in. An embedding matrix in Tensorflow must have the shape (n_chars, emb_dim), where n_chars is the number of characters (or tokens) and emb_dim is the dimensionality of the dense embedding vector space.
We typically initialize the embedding matrix randomly, e.g. python n_chars = 129 emb_dim = 10 emb = tf.Variable(tf.random_uniform([n_chars, emb_dim])) Then, in order to get the relevant embeddings we could use the one-hot encoded (sparse) representation x_one_hot (see above) as a mask: python x_dense = tf.matmul(x_one_hot, emb) There's a more efficient way of doing this, though. For this we use Tensorflow's embedding lookup function: python x_dense = tf.nn.embedding_lookup(emb, x_indices) The reason why this is more efficient is that we avoid constructing x_one_hot explicitly (x_indices is enough). In the training process, our model will learn an appropriate embedding matrix emb alongside the rest of the model parameters. Below, we show a visual representation of the character embeddings as well as the mini-batched dense input tensor. We have supplied a simple encoder in the utils module, which implements the procedure explained above (plus some more):
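To see why the embedding lookup and the one-hot matmul agree, here is a framework-free sketch with a tiny made-up embedding table (the table values and indices are purely illustrative):

```python
# Tiny made-up embedding table: 5 "characters", 3-dimensional embeddings
n_chars, emb_dim = 5, 3
emb = [[0.1 * i + 0.01 * j for j in range(emb_dim)] for i in range(n_chars)]

indices = [3, 1, 1]

# Lookup: just index into the table
dense_lookup = [emb[i] for i in indices]

# Equivalent: multiply a one-hot row vector by the embedding matrix
def one_hot_times_emb(idx):
    row = [1.0 if k == idx else 0.0 for k in range(n_chars)]
    return [sum(row[k] * emb[k][j] for k in range(n_chars))
            for j in range(emb_dim)]

dense_matmul = [one_hot_times_emb(i) for i in indices]
print(dense_lookup == dense_matmul)  # True
```

The lookup is cheaper because it never materialises the one-hot rows, which is exactly the advantage of tf.nn.embedding_lookup over tf.matmul.
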
from utils import SentenceEncoder sents = ["Hello, world!", "Hi again!", "Bye bye now."] encoder = SentenceEncoder(sents, batch_size=2) for batch in encoder: seq = batch[0] print encoder.decode(seq) print seq print
text_data_representation.ipynb
KristianHolsheimer/tensorflow_training
gpl-3.0
Exercise In this exercise we're going to use the functions that we just learned about to translate text into numeric input tensors. A) A simple character encoder. Using the examples above, write a simple encoder that takes the sentences python sents = ['Hello, world!', 'Bye bye.'] and returns the encoded sentences, padded with the EOS index (0) to equal length.
# input sentences sents = ['Hello, world!', 'Bye bye.'] # this is the expected output out = [[ 73, 102, 109, 109, 112, 45, 33, 120, 112, 115, 109, 101, 34, 0], [ 67, 122, 102, 33, 99, 122, 102, 47, 0, 0, 0, 0, 0, 0]] def encode(sents): '<your code here>' print encode(sents) np.testing.assert_array_equal(out, encode(sents)) # %load sol/ex_char_encoder.py
B) Get sparse representation. Create a one-hot encoded (sparse) representation of the sentences that we encoded above.
# clear any previous computation graphs tf.reset_default_graph() # dimensions n_chars = '<your code here>' batch_size = '<your code here>' max_seqlen = '<your code here>' # input placeholder sents_enc = '<your code here>' # sparse representation x_one_hot = '<your code here>' # input sents = ['Hello, world!', 'Bye bye.'] with tf.Session() as s: '<your code here>' # %load sol/ex_one_hot.py
C) Get dense representation. Same as the previous exercise, except now use an embedding matrix to create a dense representation of the sentences.
# clear any previous computation graphs tf.reset_default_graph() # dimensions n_chars = '<your code here>' batch_size = '<your code here>' emb_dim = '<your code here>' max_seqlen = '<your code here>' # input placeholder sents_enc = '<your code here>' # character embeddings emb = '<your code here>' # dense representation x_dense = '<your code here>' # input sents = ['Hello, world!', 'Bye bye.'] with tf.Session() as s: '<your code here>' # %load sol/ex_embedding_lookup.py
If the iterables to be combined are not all known in advance, or need to be evaluated lazily, chain.from_iterable() can be used to construct the chain instead.
from itertools import * def make_iterables_to_chain(): yield [1, 2, 3] yield ['a', 'b', 'c'] for i in chain.from_iterable(make_iterables_to_chain()): print(i, end=' ') print()
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
The built-in function zip() returns an iterator that combines the elements of several iterators into tuples.
for i in zip([1, 2, 3], ['a', 'b', 'c']): print(i)
zip() stops when the first input iterator is exhausted. To process all of the inputs, even if the iterators produce different numbers of values, use zip_longest(). By default, zip_longest() substitutes None for any missing values. Use the fillvalue argument to use a different substitute value.
from itertools import * r1 = range(3) r2 = range(2) print('zip stops early:') print(list(zip(r1, r2))) r1 = range(3) r2 = range(2) print('\nzip_longest processes all of the values:') print(list(zip_longest(r1, r2)))
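The fillvalue argument mentioned above works like this:

```python
from itertools import zip_longest

r1 = range(3)
r2 = range(2)

# fillvalue replaces the default None for exhausted iterators
result = list(zip_longest(r1, r2, fillvalue=-1))
print(result)
# [(0, 0), (1, 1), (2, -1)]
```
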
The islice() function returns an iterator which returns selected items from the input iterator, by index.
from itertools import * print('Stop at 5:') for i in islice(range(100), 5): print(i, end=' ') print('\n') print('Start at 5, Stop at 10:') for i in islice(range(100), 5, 10): print(i, end=' ') print('\n') print('By tens to 100:') for i in islice(range(100), 0, 100, 10): print(i, end=' ') print('\n')
The tee() function returns several independent iterators (defaults to 2) based on a single original input. tee() has semantics similar to the Unix tee utility, which repeats the values it reads from its input and writes them to a named file and standard output. The iterators returned by tee() can be used to feed the same set of data into multiple algorithms to be processed in parallel.
from itertools import * r = islice(count(), 5) i1, i2 = tee(r) print('i1:', list(i1)) print('i2:', list(i2))
The new iterators created by tee() share their input, so the original iterator should not be used after the new ones are created.
from itertools import * r = islice(count(), 5) i1, i2 = tee(r) print('r:', end=' ') for i in r: print(i, end=' ') if i > 1: break print() print('i1:', list(i1)) print('i2:', list(i2))
Converting Inputs The built-in map() function returns an iterator that calls a function on the values in the input iterators, and returns the results. It stops when any input iterator is exhausted.
def times_two(x): return 2 * x def multiply(x, y): return (x, y, x * y) print('Doubles:') for i in map(times_two, range(5)): print(i) print('\nMultiples:') r1 = range(5) r2 = range(5, 10) for i in map(multiply, r1, r2): print('{:d} * {:d} = {:d}'.format(*i)) print('\nStopping:') r1 = range(5) r2 = range(2) for i in map(multiply, r1, r2): print(i)
The starmap() function is similar to map(), but instead of constructing a tuple from multiple iterators, it splits up the items in a single iterator as arguments to the mapping function using the * syntax.
from itertools import * values = [(0, 5), (1, 6), (2, 7), (3, 8), (4, 9)] for i in starmap(lambda x, y: (x, y, x * y), values): print('{} * {} = {}'.format(*i))
Producing new values The count() function returns an iterator that produces consecutive integers, indefinitely. The first number can be passed as an argument (the default is zero). There is no upper bound argument (see the built-in range() for more control over the result set).
from itertools import * for i in zip(count(1), ['a', 'b', 'c']): print(i)
The start and step arguments to count() can be any numerical values that can be added together.
import fractions from itertools import * start = fractions.Fraction(1, 3) step = fractions.Fraction(1, 3) for i in zip(count(start, step), ['a', 'b', 'c']): print('{}: {}'.format(*i))
The cycle() function returns an iterator that repeats the contents of the arguments it is given indefinitely. Since it has to remember the entire contents of the input iterator, it may consume quite a bit of memory if the iterator is long.
from itertools import * for i in zip(range(7), cycle(['a', 'b', 'c'])): print(i)
The repeat() function returns an iterator that produces the same value each time it is accessed.
from itertools import * for i in repeat('over-and-over', 5): print(i)
Filtering The dropwhile() function returns an iterator that produces elements of the input iterator after a condition becomes false for the first time. dropwhile() does not filter every item of the input; after the condition is false the first time, all of the remaining items in the input are returned.
from itertools import * def should_drop(x): print('Testing:', x) return x < 1 for i in dropwhile(should_drop, [-1, 0, 1, 2, -2]): print('Yielding:', i)
The opposite of dropwhile() is takewhile(). It returns an iterator that returns items from the input iterator as long as the test function returns true.
from itertools import * def should_take(x): # print('Testing:', x) return x < 2 for i in takewhile(should_take, [-1, 0, 1, 2, -2]): print('Yielding:', i)
The built-in function filter() returns an iterator that includes only items for which the test function returns true. filter() is different from dropwhile() and takewhile() in that every item is tested before it is returned.
from itertools import * def check_item(x): print('Testing:', x) return x < 1 for i in filter(check_item, [-1, 0, 1, 2, -2]): print('Yielding:', i)
filterfalse() returns an iterator that includes only items where the test function returns false. compress() offers another way to filter the contents of an iterable. Instead of calling a function, it uses the values in another iterable to indicate when to accept a value and when to ignore it. The first argument is the data iterable to process and the second is a selector iterable producing Boolean values indicating which elements to take from the data input (a true value causes the value to be produced, a false value causes it to be ignored).
from itertools import * every_third = cycle([False, False, True]) data = range(1, 10) for i in compress(data, every_third): print(i, end=' ') print()
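The compress() example above covers the second helper; filterfalse(), mentioned first, is the complement of filter() and can be sketched as:

```python
from itertools import filterfalse

def check_item(x):
    return x < 1

# filterfalse() yields only the items for which the test returns false
kept = list(filterfalse(check_item, [-1, 0, 1, 2, -2]))
print(kept)
# [1, 2]
```
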
Grouping Data The groupby() function returns an iterator that produces sets of values organized by a common key. This example illustrates grouping related values based on an attribute. The input sequence needs to be sorted on the key value in order for the groupings to work out as expected.
import functools from itertools import * import operator import pprint @functools.total_ordering class Point: def __init__(self, x, y): self.x = x self.y = y def __repr__(self): return '({}, {})'.format(self.x, self.y) def __eq__(self, other): return (self.x, self.y) == (other.x, other.y) def __gt__(self, other): return (self.x, self.y) > (other.x, other.y) # Create a dataset of Point instances data = list(map(Point, cycle(islice(count(), 3)), islice(count(), 7))) print('Data:') pprint.pprint(data, width=35) print() # Try to group the unsorted data based on X values print('Grouped, unsorted:') for k, g in groupby(data, operator.attrgetter('x')): print(k, list(g)) print() # Sort the data data.sort() print('Sorted:') pprint.pprint(data, width=35) print() # Group the sorted data based on X values print('Grouped, sorted:') for k, g in groupby(data, operator.attrgetter('x')): print(k, list(g)) print()
Combining Inputs The accumulate() function processes the input iterable, passing the nth and n+1st item to a function and producing the return value instead of either input. The default function used to combine the two values adds them, so accumulate() can be used to produce the cumulative sum of a series of numerical inputs.
from itertools import * print(list(accumulate(range(5)))) print(list(accumulate('abcde')))
It is possible to combine accumulate() with any other function that takes two input values to achieve different results.
from itertools import * def f(a, b): print(a, b) return b + a + b print(list(accumulate('abcde', f)))
Nested for loops that iterate over multiple sequences can often be replaced with product(), which produces a single iterable whose values are the Cartesian product of the set of input values.
from itertools import * import pprint FACE_CARDS = ('J', 'Q', 'K', 'A') SUITS = ('H', 'D', 'C', 'S') DECK = list( product( chain(range(2, 11), FACE_CARDS), SUITS, ) ) for card in DECK: print('{:>2}{}'.format(*card), end=' ') if card[1] == SUITS[-1]: print()
The values produced by product() are tuples, with the members taken from each of the iterables passed in as arguments in the order they are passed. The first tuple returned includes the first value from each iterable. The last iterable passed to product() is processed first, followed by the next to last, and so on. The result is that the return values are in order based on the first iterable, then the next iterable, etc. The permutations() function produces items from the input iterable combined in the possible permutations of the given length. It defaults to producing the full set of all permutations.
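The ordering described above is easy to verify on a minimal example — the last iterable varies fastest:

```python
from itertools import product

pairs = list(product([1, 2], 'ab'))
print(pairs)
# [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
```
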
from itertools import * def show(iterable): first = None for i, item in enumerate(iterable, 1): if first != item[0]: if first is not None: print() first = item[0] print(''.join(item), end=' ') print() print('All permutations:\n') show(permutations('abcd')) print('\nPairs:\n') show(permutations('abcd', r=2))
To limit the values to unique combinations rather than permutations, use combinations(). As long as the members of the input are unique, the output will not include any repeated values.
from itertools import * def show(iterable): first = None for i, item in enumerate(iterable, 1): if first != item[0]: if first is not None: print() first = item[0] print(''.join(item), end=' ') print() print('Unique pairs:\n') show(combinations('abcd', r=2))
While combinations() does not repeat individual input elements, sometimes it is useful to consider combinations that do include repeated elements. For those cases, use combinations_with_replacement().
from itertools import * def show(iterable): first = None for i, item in enumerate(iterable, 1): if first != item[0]: if first is not None: print() first = item[0] print(''.join(item), end=' ') print() print('Unique pairs:\n') show(combinations_with_replacement('abcd', r=2))
Compute source space connectivity and visualize it using a circular graph This example computes the all-to-all connectivity between 68 regions in source space based on dSPM inverse solutions and a FreeSurfer cortical parcellation. The connectivity is visualized using a circular graph which is ordered based on the locations of the regions in the axial plane.
# Authors: Martin Luessi <mluessi@nmr.mgh.harvard.edu> # Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # Nicolas P. Rougier (graph code borrowed from his matplotlib gallery) # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne.datasets import sample from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator from mne.connectivity import spectral_connectivity from mne.viz import circular_layout, plot_connectivity_circle print(__doc__)
0.18/_downloads/d71abe904faddac1a89e44f2986e07fa/plot_mne_inverse_label_connectivity.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Get Mendeley API auth parameters First you will need to generate a client ID and client secret from here: http://dev.mendeley.com/myapps.html. Then put your client ID and client secret here:
client_id = "1988" client_secret = "CXhCJQKZ8HUrtFtg"
Bibliography/PubPeer/notebook.ipynb
hadim/public_notebooks
mit
Note that these are my personal credentials and by the time you read this they will be obsolete. Now let's start the auth process with the Mendeley API.
redirect_uri = 'https://localhost' authorization_base_url = "https://api.mendeley.com/oauth/authorize" token_url = "https://api.mendeley.com/oauth/token" oauth = OAuth2Session(client_id, redirect_uri=redirect_uri, scope=['all']) authorization_url, state = oauth.authorization_url(authorization_base_url, access_type="offline", approval_prompt="force") print('Please go to {} and authorize access.'.format(authorization_url))
Now paste the redirect (fallback) URL, including the authorization code, here:
authorization_code = "https://localhost/?code=6fBBP91iqtnu-xPdTlsqCDVroYA&state=3sX7ggAfEip4OxnGff9pOfYqNb1BTM"
Authenticate
token = oauth.fetch_token(token_url, authorization_response=authorization_code, client_secret=client_secret) mendeley = Mendeley(client_id, client_secret, redirect_uri=redirect_uri) session = MendeleySession(mendeley, token=token)
Iterate over all your articles and record them into a pandas DataFrame
articles = [] all_documents = session.documents.list() for doc in tqdm.tqdm(session.documents.iter(), total=all_documents.count): if doc.identifiers: d = {} d['title'] = doc.title d['year'] = doc.year d['source'] = doc.source d['doi'] = doc.identifiers['doi'] if 'doi' in doc.identifiers.keys() else None d['pmid'] = doc.identifiers['pmid'] if 'pmid' in doc.identifiers.keys() else None if doc.authors: authors = ["{}, {}".format(author.first_name, author.last_name) for author in doc.authors] d['authors'] = " - ".join(authors) articles.append(d) articles = pd.DataFrame(articles) print("You have {} articles with correct identifiers (DOI or PMID)".format(articles.shape[0]))
Bibliography/PubPeer/notebook.ipynb
hadim/public_notebooks
mit
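The optional-key lookups above (`doc.identifiers['doi'] if 'doi' in ...`) can be written more compactly with `dict.get`, which returns `None` for missing keys. A small sketch with hypothetical identifier dicts (`extract_ids` is not part of the Mendeley SDK):

```python
def extract_ids(identifiers):
    """Return (doi, pmid), with None for whichever identifier is absent."""
    return identifiers.get("doi"), identifiers.get("pmid")

print(extract_ids({"doi": "10.1038/nmeth.2019"}))               # ('10.1038/nmeth.2019', None)
print(extract_ids({"pmid": "22743772", "doi": "10.5772/22496"}))  # ('10.5772/22496', '22743772')
```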
Let's find matches with PubPeer
import requests

#articles.loc[0, 'doi'] = "10.5772/22496"
articles['comments'] = 0
articles['comments_link'] = None

old_n = -1
url = "http://api.pubpeer.com/v1/publications/dump/{page}?devkey=9bb8f08ebef172ec518f5a4504344ceb"

for i in range(1, 179):
    print(i)
    r = requests.get(url.format(page=i))
    all_pub = r.json()['publications']

    if all_pub:
        for pp in all_pub:
            if 'doi' in pp.keys():
                # Boolean .loc indexing avoids the chained-assignment warning
                match = articles['doi'] == pp['doi']
                articles.loc[match, 'comments'] += 1
                articles.loc[match, 'comments_link'] = pp['link']

    n_comm = (articles['comments'] >= 1).sum()
    if n_comm > 0 and n_comm > old_n:
        print("Commented articles = {}".format(n_comm))
        old_n = n_comm

articles[articles['comments'] >= 1]
Bibliography/PubPeer/notebook.ipynb
hadim/public_notebooks
mit
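Note that the matching above compares DOI strings verbatim. DOIs are case-insensitive, so a hedged improvement (not in the original notebook) is to normalize both sides before comparing:

```python
def normalize_doi(doi):
    """Lowercase and strip a DOI so that equivalent forms compare equal."""
    if doi is None:
        return None
    return doi.strip().lower()

print(normalize_doi("10.5772/22496 ") == normalize_doi("10.5772/22496"))            # True
print(normalize_doi("10.1038/NMETH.2019") == normalize_doi("10.1038/nmeth.2019"))   # True
```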
Normalize the data Now that you've loaded the training data, normalize the input so that it has a mean of 0 and a range between -0.5 and 0.5.
import numpy as np

# TODO: Implement data normalization here.
def normalize_color(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [-0.5, 0.5]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
    a = -0.5
    b = +0.5
    Xmin = 0.0
    Xmax = 255.0
    norm_img = a + (image_data - Xmin) * (b - a) / (Xmax - Xmin)
    return norm_img

X_train = normalize_color(X_train)
X_test = normalize_color(X_test)

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(round(np.mean(X_train)) == 0), "The mean of the input data is: %f" % np.mean(X_train)
assert(np.min(X_train) == -0.5 and np.max(X_train) == 0.5), "The range of the input data is: %.1f to %.1f" % (np.min(X_train), np.max(X_train))
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
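The min-max scaling used above maps a value x in [Xmin, Xmax] to a + (x − Xmin)(b − a)/(Xmax − Xmin). A minimal pure-Python sketch of the same formula (the 0–255 pixel range is assumed, as in the cell above; `min_max_scale` is a hypothetical helper name):

```python
def min_max_scale(x, a=-0.5, b=0.5, x_min=0.0, x_max=255.0):
    """Linearly map x from [x_min, x_max] to [a, b]."""
    return a + (x - x_min) * (b - a) / (x_max - x_min)

print(min_max_scale(0))      # -0.5
print(min_max_scale(255))    # 0.5
print(min_max_scale(127.5))  # 0.0
```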
Build a Two-Layer Feedforward Network The code you've written so far is for data processing, not specific to Keras. Here you're going to build Keras-specific code. Build a two-layer feedforward neural network, with 128 neurons in the fully-connected hidden layer. To get started, review the Keras documentation about models and layers. The Keras example of a Multi-Layer Perceptron network is similar to what you need to do here. Use that as a guide, but keep in mind that there are a number of differences.
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import Adam
from keras.utils import np_utils

# TODO: Build a two-layer feedforward neural network with Keras here.
model = Sequential()
model.add(Dense(128, input_shape=(flat_img_size,), name='hidden1'))
model.add(Activation('relu'))
model.add(Dense(43, name='output'))
model.add(Activation('softmax'))

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(model.get_layer(name="hidden1").input_shape == (None, 32*32*3)), "The input shape is: %s" % model.get_layer(name="hidden1").input_shape
assert(model.get_layer(name="output").output_shape == (None, 43)), "The output shape is: %s" % model.get_layer(name="output").output_shape
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
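A quick sanity check on the architecture: a Dense layer with n_in inputs and n_out units has n_in·n_out weights plus n_out biases. For the network above (3072 → 128 → 43), a small sketch (`dense_params` is a hypothetical helper, not a Keras function):

```python
def dense_params(n_in, n_out):
    """Weights plus biases of a fully-connected layer."""
    return n_in * n_out + n_out

hidden = dense_params(32 * 32 * 3, 128)
output = dense_params(128, 43)
print(hidden)           # 393344
print(output)           # 5547
print(hidden + output)  # 398891
```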
Train the Network Compile and train the network for 2 epochs. Use the adam optimizer, with categorical_crossentropy loss. Hint 1: In order to use categorical cross entropy, you will need to one-hot encode the labels. Hint 2: In order to pass the input images to the fully-connected hidden layer, you will need to reshape the input. Hint 3: Keras's .fit() method returns a History.history object, which the tests below use. Save that to a variable named history.
# One-hot encode the labels
Y_train = np_utils.to_categorical(y_train, n_classes)
Y_test = np_utils.to_categorical(y_test, n_classes)

# Reshape input for the MLP
X_train_mlp = X_train.reshape(-1, flat_img_size)
X_test_mlp = X_test.reshape(-1, flat_img_size)

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train_mlp, Y_train,
                    batch_size=128, nb_epoch=10,
                    validation_data=(X_test_mlp, Y_test), verbose=1)

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(history.history['acc'][0] > 0.5), "The training accuracy was: %.3f" % history.history['acc'][0]
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
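`np_utils.to_categorical` turns an integer label into a one-hot row vector; a pure-Python sketch of what it does for a single label (`to_one_hot` is a hypothetical helper):

```python
def to_one_hot(label, n_classes):
    """Return a length-n_classes list with a 1 at index `label`, 0 elsewhere."""
    row = [0] * n_classes
    row[label] = 1
    return row

print(to_one_hot(2, 5))  # [0, 0, 1, 0, 0]
```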
Validate the Network Split the training data into a training and validation set. Measure the validation accuracy of the network after two training epochs. Hint: Use the train_test_split() method from scikit-learn.
from sklearn.model_selection import train_test_split

# Get randomized datasets for training and validation
X_train, X_val, Y_train, Y_val = train_test_split(
    X_train, Y_train, test_size=0.25, random_state=0xdeadbeef)
X_val_mlp = X_val.reshape(-1, flat_img_size)
print('Training features and labels randomized and split.')

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(round(X_train.shape[0] / float(X_val.shape[0])) == 3), "The training set is %.3f times larger than the validation set." % (X_train.shape[0] / float(X_val.shape[0]))
assert(history.history['val_acc'][0] > 0.6), "The validation accuracy is: %.3f" % history.history['val_acc'][0]

loss, acc = model.evaluate(X_val_mlp, Y_val, verbose=1)
print('\nValidation accuracy : {0:>6.2%}'.format(acc))
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
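The assertion above checks that the split is 3:1, which follows from `test_size=0.25`: with a fraction f held out, the ratio of training to validation examples is (1 − f)/f. A small sketch with a hypothetical dataset size of 1000 (`split_sizes` is an illustrative helper, not part of scikit-learn):

```python
def split_sizes(n, test_size):
    """Number of training and validation examples for a given held-out fraction."""
    n_val = int(round(n * test_size))
    return n - n_val, n_val

n_train, n_val = split_sizes(1000, 0.25)
print(n_train, n_val)          # 750 250
print(round(n_train / n_val))  # 3
```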