At this point we can "compare" the output of the three different methods we used, again by using the zip function.
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]))

for baseline_out, cltk_out, nltk_out in zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]):
    print("Baseline: %s\nCLTK: %s\nNLTK: %s\n" % (baseline_out, cltk_out, nltk_out))
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
Exercise Extract the named entities from the English translation of De Bello Gallico, book 1. The CTS URN for this translation is urn:cts:latinLit:phi0448.phi001.perseus-eng2:1.
# Modify the code above to use the English model of the Stanford tagger instead of the Italian one.
# Hint: stanford_model_english = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/english.muc.7class.distsim.crf.ser.gz"
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
Exercise 2.2 Implement Backpropagation for an MLP in Numpy and train it. Instantiate the feed-forward model class and optimization parameters. This model follows the architecture described in Algorithm 10.
# Model
geometry = [corpus.nr_features, 20, 2]
activation_functions = ['sigmoid', 'softmax']

# Optimization
learning_rate = 0.05
num_epochs = 10
batch_size = 30

from lxmls.deep_learning.numpy_models.mlp import NumpyMLP
model = NumpyMLP(
    geometry=geometry,
    activation_functions=activation_functions,
    learnin...
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
Milestone 1: Open the code for this model, located in lxmls/deep_learning/numpy_models/mlp.py. Implement the method backpropagation() in the class NumpyMLP using the backpropagation recursion we just saw. As a first step, focus on getting the gradients of each layer, one at a time. Use the code below to plot t...
from lxmls.deep_learning.mlp import get_mlp_parameter_handlers, get_mlp_loss_range

# Get functions to get and set values of a particular weight of the model
get_parameter, set_parameter = get_mlp_parameter_handlers(
    layer_index=1,
    is_bias=False,
    row=0,
    column=0
)

# Get batch of data
batch = data.batc...
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
Once you have implemented at least the gradient of the last layer, you can start checking whether the values match.
# Get the gradient value for that weight
gradients = model.backpropagation(batch['input'], batch['output'])
current_gradient = get_parameter(gradients)
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
Now you can plot the values of the loss around a given parameter value versus the gradient. If you have implemented this correctly, the gradient should be tangent to the loss at the current weight value, see Figure 3.5. Once you have completed the exercise, you should also be able to plot the gradients of the other lay...
# Use this to know the non-zero values of the input (that have non-zero gradient)
batch['input'][0].nonzero()
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
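The tangent check described above can be sketched numerically on a toy problem. This is illustrative only (not the lxmls API): for a scalar loss, the analytic gradient at a point should match a central finite difference.

```python
# Illustrative finite-difference check (not the lxmls API): the
# analytic gradient at w0 should match (f(w0+h) - f(w0-h)) / (2h).
def numerical_gradient(f, w, h=1e-5):
    return (f(w + h) - f(w - h)) / (2 * h)

f = lambda w: (w - 3.0) ** 2        # toy quadratic loss
analytic = lambda w: 2 * (w - 3.0)  # its exact gradient

w0 = 1.0
approx = numerical_gradient(f, w0)
```

If the two values disagree by more than a few orders of magnitude above machine precision, the analytic gradient is suspect.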
Copy the following code for plotting
%matplotlib inline
import matplotlib.pyplot as plt

# Plot empirical
plt.plot(weight_range, loss_range)
plt.plot(current_weight, current_loss, 'xr')
plt.ylabel('loss value')
plt.xlabel('weight value')

# Plot real
h = plt.plot(
    weight_range,
    current_gradient*(weight_range - current_weight) + current_loss,
    'r...
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
Milestone 2: After you have ensured that your Backpropagation algorithm is correct, you can train a model with the data we have.
# Get batch iterators for train and test
train_batches = data.batches('train', batch_size=batch_size)
test_set = data.batches('test', batch_size=None)[0]

# Epoch loop
for epoch in range(num_epochs):
    # Batch loop
    for batch in train_batches:
        model.update(input=batch['input'], output=batch['output'])
    ...
labs/notebooks/non_linear_classifiers/exercise_2.ipynb
LxMLS/lxmls-toolkit
mit
Contents
I. Create and minimize the molecule
II. Select the torsional bond
III. Rigid rotation scan
IV. Relaxed rotation scan
V. Plot the potential energy surfaces
VI. Investigate conformational changes

I. Create and minimize the molecule
mol = mdt.from_smiles('CCCC')
mol.draw()
mol.set_energy_model(mdt.models.GAFF)
mol.energy_model.configure()
minimization = mol.minimize(nsteps=40)
minimization.draw()
moldesign/_notebooks/Example 5. Enthalpic barriers.ipynb
tkzeng/molecular-design-toolkit
apache-2.0
IV. Relaxed rotation scan Next, we'll get the right barrier (up to the accuracy of the energy model). Here, we'll rotate around the bond, but then perform a constrained minimization at each rotation point. This will allow all other degrees of freedom to relax, thus finding lower energies at each point along the path. ...
constraint = twist.constrain()
relaxed = mdt.Trajectory(mol)
for angle in angles:
    print angle, ':',
    # add random noise to break symmetry
    mol.positions += np.random.random(mol.positions.shape) * 0.01*u.angstrom
    mol.positions -= mol.center_of_mass
    twist.value = angle
    constraint.value = an...
moldesign/_notebooks/Example 5. Enthalpic barriers.ipynb
tkzeng/molecular-design-toolkit
apache-2.0
Instead of printing the numpy object, we can view its representation directly.
array1
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
Let's create a second list, as well as a list of lists.
ma_liste2 = [10, 20, 30, 40, 50]
mes_listes = [ma_liste, ma_liste2]
print("mes listes: \n", mes_listes)
print("-"*40)
matrice = np.array(mes_listes)
print("La matrice: \n", matrice)
print("-"*40)
print("Représentation de la matrice:")
matrice  # to view its representation
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
So we have a matrix with 2 rows and 5 columns, whose elements are integers.
print("shape = ", matrice.shape)  # number of rows and columns
print("type = ", matrice.dtype)   # information about the contents
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
We can automatically create special matrices, filled with zeros or ones, or the identity matrix (which is square by definition).
print(np.zeros(4))
print('-'*40)
print(np.zeros([3,3]))
print('-'*40)
print(np.ones([3,2]))
print('-'*40)
print(np.eye(4))  # identity matrix
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
Numpy includes a generalization of Python's built-in range function that also allows non-integer values.
np.arange(1, 5, 0.2)
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
Elementary operations on numpy matrices (arrays). By default, operations are applied to each element of the matrix. We will demonstrate this by creating two matrices of the same size (same number of rows and columns) and performing various operations. We start with operations on a single...
mat1 = np.array([[1, 2, 3],
                 [4, 5, 6]])
print(3 * mat1)
print("-"*40)
print(mat1 / 2)
print("-"*40)
print(1 / mat1)
print("-"*40)
print(mat1 % 3)
print("-"*40)
print(mat1 + 20)
print("-"*40)
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
We now consider operations involving two matrices.
mat2 = np.array([[11, 12, 13],
                 [14, 15, 16]])
print(mat1 + mat2)
print("-"*40)
print(mat2 - mat1)
print("-"*40)
print(mat1 * mat1)  # IMPORTANT: this is NOT ordinary matrix multiplication
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
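As a complement to the elementwise product above, here is a quick sketch contrasting it with true matrix multiplication via the `@` operator (numbers reuse `mat1` from the example):

```python
import numpy as np

mat1 = np.array([[1, 2, 3],
                 [4, 5, 6]])

# Elementwise product: same-shape matrices, multiplied entry by entry
elementwise = mat1 * mat1

# True matrix product: inner dimensions must match, so we
# multiply the 2x3 matrix by its 3x2 transpose
matrix_product = mat1 @ mat1.T
```

`np.dot(mat1, mat1.T)` gives the same result as `mat1 @ mat1.T`.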
If we try such operations on matrices of different sizes, they fail.
mat3 = np.array([[1, 2], [3, 4]])
mat3 * mat1
Les matrices numpy.ipynb
aroberge/notebooks-fr
cc0-1.0
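A minimal sketch of what the failure looks like: numpy raises a ValueError about broadcasting rather than guessing how to align the shapes.

```python
import numpy as np

mat1 = np.array([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
mat3 = np.array([[1, 2], [3, 4]])        # shape (2, 2)

# The shapes cannot be broadcast together, so numpy raises a ValueError
try:
    mat3 * mat1
    message = None
except ValueError as exc:
    message = str(exc)
```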
Estimate of $\lambda$ at each odd iteration: below, the number of iterations required, the eigenvalue found at the end, its associated eigenvector, the eigenvalue from the previous step and its eigenvector, and the difference between the last two eigenvectors found.
npower_eps(A, 1e-5)
Física Computacional/Ficha 6.ipynb
risantos/schoolwork
mit
Computation using the method without a stopping criterion, as a function of k instead. Returning:
+ Iteration
+ Normalized vector
+ Eigenvalue
npower(A, 6)
npower(A, 12)
Física Computacional/Ficha 6.ipynb
risantos/schoolwork
mit
Computing eigenvalues and eigenvectors using numpy's built-in function.
linalg.eig(A)
Física Computacional/Ficha 6.ipynb
risantos/schoolwork
mit
Exercise 2 - Cyclic Jacobi Method
def jacobi(A_i, eps, nitmax):
    A = np.copy(A_i)  # to cut dependencies
    m = len(A)
    iteration = 0
    Q = np.identity(m)
    def off(mat):
        off_sum = 0
        for i in range(m):
            for j in range(m):
                if j != i:
                    off_sum += mat[i, j]**2
        return np.sqrt(off_s...
Física Computacional/Ficha 6.ipynb
risantos/schoolwork
mit
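Since the cell above is cut off, here is a self-contained sketch of one way the cyclic Jacobi iteration can be completed. The function name and loop structure are my own, not the notebook's: each sweep annihilates every off-diagonal entry of a symmetric matrix with a Givens rotation, accumulating the rotations in Q.

```python
import numpy as np

def jacobi_cyclic(A_i, eps=1e-10, nitmax=100):
    """Minimal cyclic Jacobi sketch for a symmetric matrix A_i.
    Returns (eigenvalues, Q) with A_i ~ Q @ diag(eigenvalues) @ Q.T."""
    A = np.copy(A_i).astype(float)
    m = len(A)
    Q = np.identity(m)
    for _ in range(nitmax):  # one pass = one full sweep over (p, q) pairs
        off = np.sqrt(sum(A[i, j]**2 for i in range(m)
                          for j in range(m) if i != j))
        if off < eps:
            break
        for p in range(m - 1):
            for q in range(p + 1, m):
                if abs(A[p, q]) < 1e-15:
                    continue
                # Rotation angle that zeroes A[p, q]
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                G = np.identity(m)
                G[p, p] = G[q, q] = c
                G[p, q], G[q, p] = s, -s
                A = G.T @ A @ G
                Q = Q @ G
    return np.diag(A), Q

eigenvalues, Q = jacobi_cyclic(np.array([[4.0, 1.0], [1.0, 3.0]]))
```

The result can be cross-checked against numpy's linalg.eigvalsh, as in the previous cell.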
The first step is to create an object of the SampleSize class with the parameter of interest, the sample size calculation method, and the stratification status. In this example, we want to calculate sample sizes for proportions, using the Wald method for a stratified design. This is achieved with the following snippet of...
# target coverage rates
expected_coverage = {
    "Dakar": 0.849,
    "Ziguinchor": 0.809,
    "Diourbel": 0.682,
    "Saint-Louis": 0.806,
    "Tambacounda": 0.470,
    "Kaolack": 0.797,
    "Thies": 0.834,
    "Louga": 0.678,
    "Fatick": 0.766,
    "Kolda": 0.637,
    "Matam": 0.687,
    "Kaffrine": 0.766,
    "Ked...
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
SampleSize calculates the sample sizes and stores them in the samp_size attribute, which is a Python dictionary. If a dataframe is better suited for the use case, the method to_dataframe() can be used to create a pandas dataframe.
sen_vaccine_wald_size = sen_vaccine_wald.to_dataframe()
sen_vaccine_wald_size
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
The sample size calculation above assumes that the design effect (DEFF) is equal to 1. A design effect of 1 corresponds to a sampling design with a variance equivalent to a simple random selection of the same sample size. In the context of complex sampling designs, DEFF is often different from 1. Stage sampling and unequal w...
sen_vaccine_wald.calculate(target=expected_coverage, precision=0.07, deff=1.401 ** 2)
sen_vaccine_wald.to_dataframe()
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
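The role of DEFF can be sketched with plain arithmetic. Under the usual definition, the sample size for a complex design is the simple-random-sampling size inflated by the design effect (the SRS size here is a made-up number for illustration, not samplics output):

```python
# Illustrative numbers only: inflate a hypothetical SRS sample size
# by the design effect DEFF = 1.401**2 used in the cell above.
n_srs = 200               # hypothetical size under simple random sampling
deff = 1.401 ** 2         # design effect
n_complex = deff * n_srs  # size needed under the complex design
```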
Since the sample design is stratified, the sample size calculation will be more precise if DEFF is specified at the stratum level, which is available from the published 2017 Senegal DHS report. Some regions have a design effect below 1. To be conservative with our sample size calculation, we will use 1.21 as the minimum...
# Expected design effects (DEFF) by region
expected_deff = {
    "Dakar": 1.100 ** 2,
    "Ziguinchor": 1.100 ** 2,
    "Diourbel": 1.346 ** 2,
    "Saint-Louis": 1.484 ** 2,
    "Tambacounda": 1.366 ** 2,
    "Kaolack": 1.360 ** 2,
    "Thies": 1.109 ** 2,
    "Louga": 1.902 ** 2,
    "Fatick": 1.100 ** 2,
    "Kolda": 1.217 ** 2,
    "...
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
The sample size calculation above does not account for attrition of sample sizes due to non-response. In the 2017 Senegal DHS, the overall household and women response rate was about 94.2%.
# Calculate sample sizes with a resp_rate of 94.2%
sen_vaccine_wald.calculate(
    target=expected_coverage, precision=0.07, deff=expected_deff, resp_rate=0.942
)

# Convert sample sizes to a dataframe
sen_vaccine_wald.to_dataframe(
    col_names=["region", "vaccine_coverage", "precision", "number_12_23_months"]
)
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
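The response-rate adjustment itself is simple arithmetic: dividing the required size by the expected response rate inflates it to compensate for non-response. The numbers below are made up for illustration, not samplics output:

```python
import math

n_required = 393   # hypothetical sample size before attrition
resp_rate = 0.942  # expected response rate quoted above

# Inflate and round up so that, after ~5.8% attrition,
# roughly n_required completed interviews remain
n_inflated = math.ceil(n_required / resp_rate)
```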
Fleiss method The World Health Organization (WHO) recommends the Fleiss method for calculating sample sizes for vaccination coverage surveys (see https://www.who.int/immunization/documents/who_ivb_18.09/en/). To use the Fleiss method, repeat the examples above with method="fleiss".
sen_vaccine_fleiss = SampleSize(
    parameter="proportion", method="fleiss", stratification=True
)
sen_vaccine_fleiss.calculate(
    target=expected_coverage, precision=0.07, deff=expected_deff, resp_rate=0.942
)
sen_vaccine_sample = sen_vaccine_fleiss.to_dataframe(
    col_names=["region", "vaccine_coverage", "pre...
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
At this point, we have the number of children aged 12-23 months needed to achieve the desired precision given the expected proportions, using the Wald or Fleiss calculation methods. Number of households To obtain the number of households, we need to know the expected average number of children aged 12-23 months per household. This inform...
sen_vaccine_sample["number_households"] = round(
    sen_vaccine_sample["number_12_23_months"] / 0.052, 0
)
sen_vaccine_sample
docs/source/_build/html/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
Finding the knee from figure 2 from the paper
def figure2():
    x = np.linspace(0.0, 1, 10)
    with np.errstate(divide='ignore'):
        return x, np.true_divide(-1, x + 0.1) + 5
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 0: Raw input
x, y = figure2()
if not np.array_equal(np.array(x), np.sort(x)):
    raise ValueError('x needs to be sorted')
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 1: Fit a spline
from scipy.interpolate import interp1d

N = len(x)
# Ds = the finite set of x- and y-values that define a smooth curve,
# one that has been fit to a smoothing spline.
uspline = interp1d(x, y)
Ds_y = uspline(x)
plt.plot(x, Ds_y);
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 2: Normalize the spline
def normalize(a):
    """return the normalized input array"""
    return (a - min(a)) / (max(a) - min(a))

# x and y normalized to unit square
x_normalized = normalize(x)
y_normalized = normalize(Ds_y)
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 3: Calculate the difference curve
# the difference curve
y_difference = y_normalized - x_normalized
x_difference = x_normalized.copy()

plt.title("Normalized spline & difference curve");
plt.plot(x_normalized, y_normalized);
plt.plot(x_difference, y_difference);
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 4: Identify local maxima of the difference curve
from scipy.signal import argrelextrema

# local maxima for knees
maxima_indices = argrelextrema(y_difference, np.greater)[0]
x_difference_maxima = x_difference[maxima_indices]
y_difference_maxima = y_difference[maxima_indices]

# local minima
minima_indices = argrelextrema(y_difference, np.less)[0]
x_difference_minima ...
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 5: Calculate thresholds
# Sensitivity parameter S
# smaller values detect knees quicker
S = 1.0
Tmx = y_difference_maxima - (S * np.abs(np.diff(x_normalized).mean()))
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
Step 6: knee finding algorithm If any difference value (x_dj, y_dj), where j > i, drops below the threshold y = T_lmx,i for the local maximum (x_lmx,i, y_lmx,i) before the next local maximum in the difference curve is reached, Kneedle declares a knee at the x-value of the corresponding local maximum, x = x_lmx,i. If the difference values reach a lo...
# artificially place a local max at the last item in the x_difference array
maxima_indices = np.append(maxima_indices, len(x_difference) - 1)
minima_indices = np.append(minima_indices, len(x_difference) - 1)

# placeholder for which threshold region i is located in.
maxima_threshold_index = 0
minima_threshold_index = 0...
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
The vertical red dashed line represents the x value of the knee point. The horizontal green dashed line represents the threshold value.
knee

# normalized x value where the knee was determined
norm_knee
notebooks/kneedle_algorithm.ipynb
arvkevi/kneed
bsd-3-clause
There is a document in the DCC which gives the ZPK filter for the noisemon circuit: LIGO-T1100378 There are four zeros at 0, four poles at 5 Hz, and two poles at 4000 Hz. I just eyeballed the gain by matching the ASDs of the filtered drive signal and the noisemon. Passing the drive signal (e.g. ETMY_L1_MASTER_LR) throug...
def project_out_drive(glitch_time, dur=64, level='L1', quadrant='LR', gain=3.5e10):
    pad = 8
    gt = int(round(glitch_time))
    st = gt - dur/2
    et = gt + dur
    drive = TimeSeries.fetch('L1:SUS-ETMY_%s_MASTER_OUT_%s_DQ' % (level, quadrant), st-pad, et)
    nmon = TimeSeries.fetch('L1:S...
Notebooks/Subtraction/etmy_l1.ipynb
andrew-lundgren/detchar
gpl-3.0
The subtraction works pretty well. I adjusted the gain until the calibration lines were as low as I could get them (because they're the strongest feature and are well above the noise). The subtraction only seems to be good to about 1 part in 10. Probably the noisemon has some extra phase or the zeros/poles are not exac...
def plot_glitch(glitch, quadrant='LR'):
    nmon, proj, drive = project_out_drive(glitch, quadrant=quadrant)
    p1 = nmon.highpass(8).crop(glitch-1, glitch+1).plot(label='Original')
    p1.gca().plot((nmon-proj).highpass(8).crop(glitch-1, glitch+1), label='Drive removed')
    p1.gca().legend()

plot_glitch(1183061141.7...
Notebooks/Subtraction/etmy_l1.ipynb
andrew-lundgren/detchar
gpl-3.0
Part B The lecture on statistics mentions latent variables, specifically how you cannot know what the underlying process is that's generating your data; all you have is the data, on which you have to impose certain assumptions in order to derive hypotheses about what generated the data in the first place. To illustrate...
import numpy as np
np.random.seed(5735636)

sample1 = np.random.normal(loc=10, scale=5, size=10)
sample2 = np.random.normal(loc=10, scale=5, size=1000)
sample3 = np.random.normal(loc=10, scale=5, size=1000000)

#########################
# DON'T MODIFY ANYTHING #
#   ABOVE THIS BLOCK    #
#############...
assignments/A6/A6_Q3.ipynb
eds-uga/csci1360e-su17
mit
If CPLEX is not installed, install CPLEX Community edition.
try:
    import cplex
except ImportError:
    raise Exception('Please install CPLEX. See https://pypi.org/project/cplex/')
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 2: Model the data In this scenario, the data is simple. There are eight teams in each division, and the teams must play each team in their division once and each team outside the division once. Use the Python collections module, which implements data structures that will help solve some problems. Named tuples h...
# Teams in 1st division
team_div1 = ["Baltimore Ravens", "Cincinnati Bengals", "Cleveland Browns",
             "Pittsburgh Steelers", "Houston Texans", "Indianapolis Colts",
             "Jacksonville Jaguars", "Tennessee Titans", "Buffalo Bills",
             "Miami Dolphins", "New England Patriots", "New York Jets",
             "Denver Broncos...
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Now you will import the pandas library. Pandas is an open source Python library for data analysis. It uses two data structures, Series and DataFrame, which are built on top of NumPy. A Series is a one-dimensional object similar to an array, list, or column in a table. It will assign a labeled index to each item in the ...
import pandas as pd

team1 = pd.DataFrame(team_div1)
team2 = pd.DataFrame(team_div2)
team1.columns = ["AFC"]
team2.columns = ["NFC"]
teams = pd.concat([team1, team2], axis=1)
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 3: Prepare the data Given the number of teams in each division and the number of intradivisional and interdivisional games to be played, you can calculate the total number of teams and the number of weeks in the schedule, assuming every team plays exactly one game per week. The season is split into halves, and th...
import numpy as np

nb_teams = 2 * nb_teams_in_division
teams = range(nb_teams)

# Calculate the number of weeks necessary
nb_inside_div = (nb_teams_in_division - 1) * number_of_matches_inside_division
nb_outside_div = nb_teams_in_division * number_of_matches_outside_division
nb_weeks = nb_inside_div + nb_outside_d...
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
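With the scenario's numbers (8 teams per division, one meeting inside and one outside the division), the week count works out as follows; this just replays the arithmetic above with concrete values:

```python
# Worked numbers for the scenario described above: 8 teams per
# division, one intradivisional and one interdivisional meeting each.
nb_teams_in_division = 8
number_of_matches_inside_division = 1
number_of_matches_outside_division = 1

nb_inside_div = (nb_teams_in_division - 1) * number_of_matches_inside_division  # 7
nb_outside_div = nb_teams_in_division * number_of_matches_outside_division     # 8
nb_weeks = nb_inside_div + nb_outside_div                                      # 15
```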
Number of games to play between pairs depends on whether the pairing is intradivisional or not.
nb_play = {m: number_of_matches_inside_division if m.is_divisional == 1
           else number_of_matches_outside_division
           for m in matches}
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 4: Set up the prescriptive model Create the DOcplex model The model contains all the business constraints and defines the objective.
from docplex.mp.model import Model

mdl = Model("sports")
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Define the decision variables
plays = mdl.binary_var_matrix(matches, weeks, lambda ij: "x_%s_%d" %(str(ij[0]), ij[1]))
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Express the business constraints Each pair of teams must play the correct number of games.
mdl.add_constraints(mdl.sum(plays[m, w] for w in weeks) == nb_play[m]
                    for m in matches)
mdl.print_information()
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Each team must play exactly once in a week.
mdl.add_constraints(
    mdl.sum(plays[m, w] for m in matches if (m.team1 == t or m.team2 == t)) == 1
    for w in weeks for t in teams)
mdl.print_information()
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Games between the same teams cannot be on successive weeks.
mdl.add_constraints(
    plays[m, w] + plays[m, w+1] <= 1
    for w in weeks for m in matches if w < nb_weeks - 1)
mdl.print_information()
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Some intradivisional games should be in the first half.
mdl.add_constraints(
    mdl.sum(plays[m, w] for w in first_half_weeks for m in matches
            if ((m.team1 == t or m.team2 == t) and m.is_divisional == 1)) >= nb_first_half_games
    for t in teams)
mdl.print_information()
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Express the objective The objective function for this example is designed to force intradivisional games to occur as late in the season as possible. The incentive for intradivisional games increases by week. There is no incentive for interdivisional games.
gain = {w: w * w for w in weeks}

# If an intradivisional pair plays in week w, gain[w] is added to the objective.
mdl.maximize(mdl.sum(m.is_divisional * gain[w] * plays[m, w]
                     for m in matches for w in weeks))
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Solve with Decision Optimization You will get the best solution found after n seconds, due to a time limit parameter.
mdl.print_information()
assert mdl.solve(), "!!! Solve of the model fails"
mdl.report()
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 5: Investigate the solution and then run an example analysis Determine which of the scheduled games will be a replay of one of the last 10 Super Bowls.<br> We start by creating a pandas DataFrame that contains the year and teams who played the last 10 Super Bowls.
try:  # Python 2
    team_league = dict({t: team_div1[t] for t in range(nb_teams_in_division)}.items() +
                       {t + nb_teams_in_division: team_div2[t] for t in range(nb_teams_in_division)}.items())
except:  # Python 3
    team_league = dict(list({t: team_div1[t] for t in range(nb_teams_in_div...
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
We now look for the games in our solution that are replays of one of the past 10 Super Bowls.
months = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]
report = []
for m in solution:
    if (m.team1, m.team2) in nfl_meetings:
        report.append((m.week, months[m.week//4], m.team1, m.team2))
    if (m.team2, m.team1) in nfl_m...
examples/mp/jupyter/sports_scheduling.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Problem definition The rampy.centroid function will calculate the centroid of the signal you provide to it. In this case, we have a combination of two Gaussian peaks with some noise. This example is the one used in the Machine Learning Regression notebook. The example signals $D_{i,j}$ are generated from a linear combinat...
x = np.arange(0, 600, 1.0)
nb_samples = 100  # number of samples in our dataset

# partial spectra
S_1 = scipy.stats.norm.pdf(x, loc=300., scale=40.)
S_2 = scipy.stats.norm.pdf(x, loc=400, scale=20)
S_true = np.vstack((S_1, S_2))

print("Number of samples:" + str(nb_samples))
print("Shape of partial spectra matrix:" + str(S_true.sh...
examples/Baseline_and_Centroid_determination.ipynb
charlesll/RamPy
gpl-2.0
Baseline fit We will use the rampy.baseline function with a polynomial form. We first create the array to store baseline-corrected spectra
Obs_corr = np.ones(Obs.shape)
print(Obs_corr.shape)
examples/Baseline_and_Centroid_determination.ipynb
charlesll/RamPy
gpl-2.0
We define regions of interest ROI where the baseline will fit the signals. From the previous figure, it is clear that they should lie between 0 and 100, and between 500 and 600.
ROI = np.array([[0., 100.], [500., 600.]])
print(ROI)
examples/Baseline_and_Centroid_determination.ipynb
charlesll/RamPy
gpl-2.0
Then we loop to save the baseline corrected data in this array.
for i in range(nb_samples):
    sig_corr, bas_ = rp.baseline(x, Obs[i,:].T, ROI, method="poly", polynomial_order=2)
    Obs_corr[i,:] = sig_corr.reshape(1,-1)

# plotting spectra
# calling the ScalarMappable that was initialised with c_m and norm
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
for i in range(C_.shape[0]):...
examples/Baseline_and_Centroid_determination.ipynb
charlesll/RamPy
gpl-2.0
Centroid determination Now we can calculate the centroid of the signal. rampy.centroid calculates it as centroid = np.sum(y_/np.sum(y_)*x). It accepts arrays of spectra, organised as n points by m samples. Smoothing can be applied if wanted by setting smoothing = True. We will compare both in the following code. A twe...
x_array = np.ones((len(x), nb_samples))
for i in range(nb_samples):
    x_array[:, i] = x

centroids_no_smooth = rp.centroid(x_array, Obs_corr.T)
centroids_smooth = rp.centroid(x_array, Obs_corr.T, smoothing=True)
centroids_true_sig = rp.centroid(x_array, true_sig.T, smoothing=True)
examples/Baseline_and_Centroid_determination.ipynb
charlesll/RamPy
gpl-2.0
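The centroid formula above can be checked on a toy signal: for a symmetric Gaussian peak, the intensity-weighted mean should recover the peak position. The signal here is made up for illustration and does not use rampy:

```python
import numpy as np

# Synthetic symmetric peak centred at x = 300, on the same grid as above
x = np.arange(0, 600, 1.0)
y = np.exp(-0.5 * ((x - 300.0) / 40.0) ** 2)

# centroid = sum of x weighted by the normalised intensity
centroid = np.sum(y / np.sum(y) * x)
```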
Now we can plot the centroids against the chemical ratio C_ for instance.
plt.figure()
plt.plot(C_, centroids_true_sig, "r-", markersize=3., label="true values")
plt.plot(C_, centroids_no_smooth, "k.", markersize=5., label="non-smoothed centroids")
plt.plot(C_, centroids_smooth, "b+", markersize=3., label="smoothed centroids")
plt.xlabel("Fraction C_")
plt.ylabel("Signal centroid")
plt.legend()
examples/Baseline_and_Centroid_determination.ipynb
charlesll/RamPy
gpl-2.0
Since we're dealing with text, we need to turn the characters into numbers in order to perform our calculations on them. We do this in two steps: first we get the sparse (one-hot encoded) representation of each character and then we learn a dense representation (so-called embeddings) as part of our model training. Spar...
from utils import SentenceEncoder

sents = ["Hello, world!", "Hi again!", "Bye bye now."]
encoder = SentenceEncoder(sents, batch_size=2)
for batch in encoder:
    seq = batch[0]
    print encoder.decode(seq)
    print seq
    print
text_data_representation.ipynb
KristianHolsheimer/tensorflow_training
gpl-3.0
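The two-step idea described above can be sketched without TensorFlow. The names here are illustrative, not the notebook's SentenceEncoder API: integer-encode the characters, index an identity matrix for the sparse (one-hot) rows, then index an embedding matrix for the dense rows.

```python
import numpy as np

# Hypothetical mini vocabulary built from one sentence
chars = sorted(set("Hi again!"))
char_to_idx = {c: i for i, c in enumerate(chars)}

# Step 1: sparse side - integer codes, then one-hot rows
encoded = [char_to_idx[c] for c in "Hi"]
one_hot = np.eye(len(chars))[encoded]  # shape: (2, vocab size)

# Step 2: dense side - an embedding matrix maps each index to a
# low-dimensional vector (random here; learned during training)
emb_dim = 4
embedding = np.random.randn(len(chars), emb_dim)
dense = embedding[encoded]             # shape: (2, emb_dim)
```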
Exercise In this exercise we're going to use the functions we just learned about to translate text into numeric input tensors. A) A simple character encoder. Using the examples above, write a simple encoder that takes the sentences sents = ['Hello, world!', 'Bye bye.'] and returns both encoded sentences.
# input sentences
sents = ['Hello, world!', 'Bye bye.']

# this is the expected output
out = [[ 73, 102, 109, 109, 112,  45,  33, 120, 112, 115, 109, 101,  34,   0],
       [ 67, 122, 102,  33,  99, 122, 102,  47,   0,   0,   0,   0,   0,   0]]

def encode(sents):
    '<your code here>'

print encode(sents)
np.testin...
text_data_representation.ipynb
KristianHolsheimer/tensorflow_training
gpl-3.0
B) Get sparse representation. Create a one-hot encoded (sparse) representation of the sentences that we encoded above.
# clear any previous computation graphs
tf.reset_default_graph()

# dimensions
n_chars = '<your code here>'
batch_size = '<your code here>'
max_seqlen = '<your code here>'

# input placeholder
sents_enc = '<your code here>'

# sparse representation
x_one_hot = '<your code here>'

# input
sents = ['Hello, world!', 'Bye ...
text_data_representation.ipynb
KristianHolsheimer/tensorflow_training
gpl-3.0
C) Get dense representation. Same as the previous exercise, except now use an embedding matrix to create a dense representation of the sentences.
# clear any previous computation graphs
tf.reset_default_graph()

# dimensions
n_chars = '<your code here>'
batch_size = '<your code here>'
emb_dim = '<your code here>'
max_seqlen = '<your code here>'

# input placeholder
sents_enc = '<your code here>'

# character embeddings
emb = '<your code here>'

# dense represent...
text_data_representation.ipynb
KristianHolsheimer/tensorflow_training
gpl-3.0
If the iterables to be combined are not all known in advance, or need to be evaluated lazily, chain.from_iterable() can be used to construct the chain instead.
from itertools import *

def make_iterables_to_chain():
    yield [1, 2, 3]
    yield ['a', 'b', 'c']

for i in chain.from_iterable(make_iterables_to_chain()):
    print(i, end=' ')
print()
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
The built-in function zip() returns an iterator that combines the elements of several iterators into tuples.
for i in zip([1, 2, 3], ['a', 'b', 'c']):
    print(i)
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
zip() stops when the first input iterator is exhausted. To process all of the inputs, even if the iterators produce different numbers of values, use zip_longest(). By default, zip_longest() substitutes None for any missing values. Use the fillvalue argument to use a different substitute value.
from itertools import *

r1 = range(3)
r2 = range(2)

print('zip stops early:')
print(list(zip(r1, r2)))

r1 = range(3)
r2 = range(2)

print('\nzip_longest processes all of the values:')
print(list(zip_longest(r1, r2)))
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
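The fillvalue argument mentioned above can be sketched like this:

```python
from itertools import zip_longest

r1 = range(3)
r2 = range(2)

# fillvalue substitutes 0 (instead of the default None)
# for values missing from the shorter iterator
padded = list(zip_longest(r1, r2, fillvalue=0))
```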
The islice() function returns an iterator which returns selected items from the input iterator, by index.
from itertools import *

print('Stop at 5:')
for i in islice(range(100), 5):
    print(i, end=' ')
print('\n')

print('Start at 5, Stop at 10:')
for i in islice(range(100), 5, 10):
    print(i, end=' ')
print('\n')

print('By tens to 100:')
for i in islice(range(100), 0, 100, 10):
    print(i, end=' ')
print('\n')
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
The tee() function returns several independent iterators (defaults to 2) based on a single original input. tee() has semantics similar to the Unix tee utility, which repeats the values it reads from its input and writes them to a named file and standard output. The iterators returned by tee() can be used to feed the sa...
from itertools import *

r = islice(count(), 5)
i1, i2 = tee(r)

print('i1:', list(i1))
print('i2:', list(i2))
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
The new iterators created by tee() share their input, so the original iterator should not be used after the new ones are created.
from itertools import *

r = islice(count(), 5)
i1, i2 = tee(r)

print('r:', end=' ')
for i in r:
    print(i, end=' ')
    if i > 1:
        break
print()

print('i1:', list(i1))
print('i2:', list(i2))
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
Converting Inputs

The built-in map() function returns an iterator that calls a function on the values in the input iterators, and returns the results. It stops when any input iterator is exhausted.
def times_two(x):
    return 2 * x

def multiply(x, y):
    return (x, y, x * y)

print('Doubles:')
for i in map(times_two, range(5)):
    print(i)

print('\nMultiples:')
r1 = range(5)
r2 = range(5, 10)
for i in map(multiply, r1, r2):
    print('{:d} * {:d} = {:d}'.format(*i))

print('\nStopping:')
r1 = range(5)
r2 =...
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
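Since the cell above is cut off before the stopping demonstration, here is a minimal sketch of the behavior the text describes: map() ends as soon as the shortest input iterator is exhausted.

```python
def multiply(x, y):
    return (x, y, x * y)

r1 = range(5)
r2 = range(2)  # the shorter input

# map() yields only two items, stopping when r2 runs out
print(list(map(multiply, r1, r2)))
# → [(0, 0, 0), (1, 1, 1)]
```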
The starmap() function is similar to map(), but instead of constructing a tuple from multiple iterators, it splits up the items in a single iterator as arguments to the mapping function using the * syntax.
from itertools import *

values = [(0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]

for i in starmap(lambda x, y: (x, y, x * y), values):
    print('{} * {} = {}'.format(*i))
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
Producing new values

The count() function returns an iterator that produces consecutive integers, indefinitely. The first number can be passed as an argument (the default is zero). There is no upper bound argument (see the built-in range() for more control over the result set).
from itertools import *

for i in zip(count(1), ['a', 'b', 'c']):
    print(i)
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
The start and step arguments to count() can be any numerical values that can be added together.
import fractions
from itertools import *

start = fractions.Fraction(1, 3)
step = fractions.Fraction(1, 3)

for i in zip(count(start, step), ['a', 'b', 'c']):
    print('{}: {}'.format(*i))
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
The cycle() function returns an iterator that repeats the contents of the arguments it is given indefinitely. Since it has to remember the entire contents of the input iterator, it may consume quite a bit of memory if the iterator is long.
from itertools import *

for i in zip(range(7), cycle(['a', 'b', 'c'])):
    print(i)
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
The repeat() function returns an iterator that produces the same value each time it is accessed.
from itertools import *

for i in repeat('over-and-over', 5):
    print(i)
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
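repeat() is handy when a constant value is needed alongside varying inputs; as a sketch (pairing it with the built-in pow is an illustrative choice, not part of the cell above):

```python
from itertools import repeat

# square each value by pairing it with a constant exponent of 2
print(list(map(pow, range(5), repeat(2))))
# → [0, 1, 4, 9, 16]
```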
Filtering

The dropwhile() function returns an iterator that produces elements of the input iterator after a condition becomes false for the first time. dropwhile() does not filter every item of the input; after the condition is false the first time, all of the remaining items in the input are returned.
from itertools import *

def should_drop(x):
    print('Testing:', x)
    return x < 1

for i in dropwhile(should_drop, [-1, 0, 1, 2, -2]):
    print('Yielding:', i)
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
The opposite of dropwhile() is takewhile(). It returns an iterator that returns items from the input iterator as long as the test function returns true.
from itertools import *

def should_take(x):
    # print('Testing:', x)
    return x < 2

for i in takewhile(should_take, [-1, 0, 1, 2, -2]):
    print('Yielding:', i)
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
The built-in function filter() returns an iterator that includes only items for which the test function returns true. filter() is different from dropwhile() and takewhile() in that every item is tested before it is returned.
from itertools import *

def check_item(x):
    print('Testing:', x)
    return x < 1

for i in filter(check_item, [-1, 0, 1, 2, -2]):
    print('Yielding:', i)
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
filterfalse() returns an iterator that includes only items where the test function returns false. compress() offers another way to filter the contents of an iterable. Instead of calling a function, it uses the values in another iterable to indicate when to accept a value and when to ignore it. The first argument is the...
from itertools import *

every_third = cycle([False, False, True])
data = range(1, 10)

for i in compress(data, every_third):
    print(i, end=' ')
print()
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
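filterfalse(), mentioned above without a code cell of its own, is the mirror image of filter(): it keeps only the items for which the test returns false. A minimal sketch using the same test function as the filter() example:

```python
from itertools import filterfalse

def check_item(x):
    return x < 1

# keep only the items for which check_item() returns False
print(list(filterfalse(check_item, [-1, 0, 1, 2, -2])))
# → [1, 2]
```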
Grouping Data

The groupby() function returns an iterator that produces sets of values organized by a common key. This example illustrates grouping related values based on an attribute. The input sequence needs to be sorted on the key value in order for the groupings to work out as expected.
import functools
from itertools import *
import operator
import pprint

@functools.total_ordering
class Point:

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return '({}, {})'.format(self.x, self.y)

    def __eq__(self, other):
        return (self.x, self.y) == ...
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
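Because the cell above is truncated, here is a simpler self-contained sketch of groupby() on plain tuples; note that the input is sorted on the key first, as the text requires:

```python
from itertools import groupby
import operator

data = [('a', 1), ('b', 2), ('a', 3), ('b', 4)]

# groupby() expects the input already sorted on the grouping key
data.sort(key=operator.itemgetter(0))

for key, group in groupby(data, key=operator.itemgetter(0)):
    print(key, list(group))
# → a [('a', 1), ('a', 3)]
# → b [('b', 2), ('b', 4)]
```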
Combining Inputs

The accumulate() function processes the input iterable, passing the nth and n+1st item to a function and producing the return value instead of either input. The default function used to combine the two values adds them, so accumulate() can be used to produce the cumulative sum of a series of numerical ...
from itertools import *

print(list(accumulate(range(5))))
print(list(accumulate('abcde')))
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
It is possible to combine accumulate() with any other function that takes two input values to achieve different results.
from itertools import *

def f(a, b):
    print(a, b)
    return b + a + b

print(list(accumulate('abcde', f)))
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
Nested for loops that iterate over multiple sequences can often be replaced with product(), which produces a single iterable whose values are the Cartesian product of the set of input values.
from itertools import *
import pprint

FACE_CARDS = ('J', 'Q', 'K', 'A')
SUITS = ('H', 'D', 'C', 'S')

DECK = list(
    product(
        chain(range(2, 11), FACE_CARDS),
        SUITS,
    )
)

for card in DECK:
    print('{:>2}{}'.format(*card), end=' ')
    if card[1] == SUITS[-1]:
        print()
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
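To compute the product of an iterable with itself, product() also accepts a repeat keyword argument (a detail not shown in the deck example above); a short sketch:

```python
from itertools import product

# repeat=2 is equivalent to product('ab', 'ab')
print(list(product('ab', repeat=2)))
# → [('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')]
```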
The values produced by product() are tuples, with the members taken from each of the iterables passed in as arguments in the order they are passed. The first tuple returned includes the first value from each iterable. The last iterable passed to product() is processed first, followed by the next to last, and so on. The...
from itertools import *

def show(iterable):
    first = None
    for i, item in enumerate(iterable, 1):
        if first != item[0]:
            if first is not None:
                print()
            first = item[0]
        print(''.join(item), end=' ')
    print()

print('All permutations:\n')
show(permutations(...
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
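permutations() also accepts a length argument r to limit the results to shorter sequences; a compact sketch of that variant:

```python
from itertools import permutations

# all orderings of length 2 drawn from 'abc'
print([''.join(p) for p in permutations('abc', 2)])
# → ['ab', 'ac', 'ba', 'bc', 'ca', 'cb']
```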
To limit the values to unique combinations rather than permutations, use combinations(). As long as the members of the input are unique, the output will not include any repeated values.
from itertools import *

def show(iterable):
    first = None
    for i, item in enumerate(iterable, 1):
        if first != item[0]:
            if first is not None:
                print()
            first = item[0]
        print(''.join(item), end=' ')
    print()

print('Unique pairs:\n')
show(combinations('abc...
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
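Since the cell above is cut off, here is a compact sketch of the same idea:

```python
from itertools import combinations

# unordered pairs: no element is repeated within a pair,
# and ('b', 'a') is not emitted once ('a', 'b') has been
print([''.join(c) for c in combinations('abc', 2)])
# → ['ab', 'ac', 'bc']
```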
While combinations() does not repeat individual input elements, sometimes it is useful to consider combinations that do include repeated elements. For those cases, use combinations_with_replacement().
from itertools import *

def show(iterable):
    first = None
    for i, item in enumerate(iterable, 1):
        if first != item[0]:
            if first is not None:
                print()
            first = item[0]
        print(''.join(item), end=' ')
    print()

print('Unique pairs:\n')
show(combinations_with...
algorithm/iterator.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
Compute source space connectivity and visualize it using a circular graph

This example computes the all-to-all connectivity between 68 regions in source space based on dSPM inverse solutions and a FreeSurfer cortical parcellation. The connectivity is visualized using a circular graph which is ordered based on the locat...
# Authors: Martin Luessi <mluessi@nmr.mgh.harvard.edu>
#          Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#          Nicolas P. Rougier (graph code borrowed from his matplotlib gallery)
#
# License: BSD (3-clause)

import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.datasets imp...
0.18/_downloads/d71abe904faddac1a89e44f2986e07fa/plot_mne_inverse_label_connectivity.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Get Mendeley API auth parameters

First you will need to generate a client ID and client secret from http://dev.mendeley.com/myapps.html. Then put your client ID and client secret here:
client_id = "1988"
client_secret = "CXhCJQKZ8HUrtFtg"
Bibliography/PubPeer/notebook.ipynb
hadim/public_notebooks
mit
Note that these are my personal credentials, and by the time you read this they will be obsolete. Now let's start the auth process with the Mendeley API.
redirect_uri = 'https://localhost'
authorization_base_url = "https://api.mendeley.com/oauth/authorize"
token_url = "https://api.mendeley.com/oauth/token"

oauth = OAuth2Session(client_id, redirect_uri=redirect_uri, scope=['all'])
authorization_url, state = oauth.authorization_url(authorization_base_url, ...
Bibliography/PubPeer/notebook.ipynb
hadim/public_notebooks
mit
Now paste the fallback URL here:
authorization_code = "https://localhost/?code=6fBBP91iqtnu-xPdTlsqCDVroYA&state=3sX7ggAfEip4OxnGff9pOfYqNb1BTM"
Bibliography/PubPeer/notebook.ipynb
hadim/public_notebooks
mit
Authenticate
token = oauth.fetch_token(token_url,
                          authorization_response=authorization_code,
                          client_secret=client_secret)

mendeley = Mendeley(client_id, client_secret, redirect_uri=redirect_uri)
session = MendeleySession(mendeley, token=token)
Bibliography/PubPeer/notebook.ipynb
hadim/public_notebooks
mit
Iterate over all your articles and record them into a pandas DataFrame
articles = []
all_documents = session.documents.list()

for doc in tqdm.tqdm(session.documents.iter(), total=all_documents.count):
    if doc.identifiers:
        d = {}
        d['title'] = doc.title
        d['year'] = doc.year
        d['source'] = doc.source
        d['doi'] = doc.identifiers['doi'] if 'doi' i...
Bibliography/PubPeer/notebook.ipynb
hadim/public_notebooks
mit
Let's find matches with PubPeer
pd.options.mode.chained_assignment = None
# articles.loc[0, 'doi'] = "10.5772/22496"

articles['comments'] = 0
articles['comments_link'] = None

old_n = -1
for i in range(1, 179):
    print(i)
    url = "http://api.pubpeer.com/v1/publications/dump/{page}?devkey=9bb8f08ebef172ec518f5a4504344ceb"
    r = requests.get...
Bibliography/PubPeer/notebook.ipynb
hadim/public_notebooks
mit
Normalize the data

Now that you've loaded the training data, normalize the input so that it has a mean of 0 and a range between -0.5 and 0.5.
# TODO: Implement data normalization here.
def normalize_color(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [-0.5, 0.5]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
    a = -0.5
    b = +0.5
    Xmin = 0.0
    Xmax = 255.0
    ...
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
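The truncated cell above applies Min-Max scaling to raw pixel values; the formula it sets up can be sketched in plain Python (the endpoints a, b and the pixel bounds 0–255 follow the constants in that cell — the list-based signature here is a simplification of the image-array version):

```python
def normalize_color(values, a=-0.5, b=0.5, x_min=0.0, x_max=255.0):
    """Min-Max scale raw pixel values in [x_min, x_max] into the range [a, b]."""
    return [a + (x - x_min) * (b - a) / (x_max - x_min) for x in values]

# darkest, middle, and brightest pixel values
print(normalize_color([0, 127.5, 255]))
# → [-0.5, 0.0, 0.5]
```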
Build a Two-Layer Feedforward Network

The code you've written so far is for data processing, not specific to Keras. Here you're going to build Keras-specific code. Build a two-layer feedforward neural network, with 128 neurons in the fully-connected hidden layer. To get started, review the Keras documentation about mod...
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import Adam
from keras.utils import np_utils

# TODO: Build a two-layer feedforward neural network with Keras here.
model = Sequential()
model.add(Dense(128, input_shape=(flat_img_size,), name='hidden1'))
model.add...
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
Train the Network

Compile and train the network for 2 epochs. Use the adam optimizer, with categorical_crossentropy loss.

Hint 1: In order to use categorical cross entropy, you will need to one-hot encode the labels.

Hint 2: In order to pass the input images to the fully-connected hidden layer, you will need to reshape...
# One-hot encode the labels
Y_train = np_utils.to_categorical(y_train, n_classes)
Y_test = np_utils.to_categorical(y_test, n_classes)

# Reshape input for MLP
X_train_mlp = X_train.reshape(-1, flat_img_size)
X_test_mlp = X_test.reshape(-1, flat_img_size)

model.compile(optimizer='adam', loss='categorical_crossentropy',...
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
Validate the Network

Split the training data into a training and validation set. Measure the validation accuracy of the network after two training epochs.

Hint: Use the train_test_split() method from scikit-learn.
# Get randomized datasets for training and validation
X_train, X_val, Y_train, Y_val = train_test_split(
    X_train,
    Y_train,
    test_size=0.25,
    random_state=0xdeadbeef)
X_val_mlp = X_val.reshape(-1, flat_img_size)
print('Training features and labels randomized and split.')

# STOP: Do not change the tests b...
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
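The split above relies on scikit-learn's train_test_split(); the underlying idea can be sketched in plain Python (a simplified stand-in with hypothetical names, not the library call — it shuffles index order, then carves off a validation slice while keeping features and labels aligned):

```python
import random

def simple_split(samples, labels, test_size=0.25, seed=0):
    """Shuffle a list of indices, then split both lists with the same index order."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    n_val = int(len(samples) * test_size)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    x_train = [samples[i] for i in train_idx]
    x_val = [samples[i] for i in val_idx]
    y_train = [labels[i] for i in train_idx]
    y_val = [labels[i] for i in val_idx]
    return x_train, x_val, y_train, y_val

# 8 samples with test_size=0.25 → 6 for training, 2 for validation
xt, xv, yt, yv = simple_split(list(range(8)), list('abcdefgh'))
print(len(xt), len(xv))
# → 6 2
```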