Go ahead and try to create some new searches on your own from the parameter list. Please feel free to also try out some of the same searches on the Marvin-web Search page. Returning Bonus Parameters Often you want to run a query and see the value of parameters that you didn't explicitly search on. For instance, you w...
myquery5 = 'nsa.z > 0.1' bonusparams5 = ['cube.ra', 'cube.dec'] # bonusparams5 = 'cube.ra' # This works too q5 = Query(searchfilter=myquery5, returnparams=bonusparams5) r5 = q5.run() r5.results
docs/sphinx/jupyter/my-first-query.ipynb
bretthandrews/marvin
bsd-3-clause
Point Interpolation Compares different point interpolation approaches.
import cartopy.crs as ccrs import cartopy.feature as cfeature from matplotlib.colors import BoundaryNorm import matplotlib.pyplot as plt import numpy as np from metpy.cbook import get_test_data from metpy.interpolate import (interpolate_to_grid, remove_nan_observations, remove_repeat_coo...
v1.0/_downloads/c1a3b4ec1d09d4debc078297d433a9b2/Point_Interpolation.ipynb
metpy/MetPy
bsd-3-clause
Scipy.interpolate linear
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='linear', hres=75000) img = np.ma.masked_where(np.isnan(img), img) fig, view = basic_map(to_proj, 'Linear') mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
v1.0/_downloads/c1a3b4ec1d09d4debc078297d433a9b2/Point_Interpolation.ipynb
metpy/MetPy
bsd-3-clause
Natural neighbor interpolation (MetPy implementation) Reference: https://github.com/Unidata/MetPy/files/138653/cwp-657.pdf
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='natural_neighbor', hres=75000) img = np.ma.masked_where(np.isnan(img), img) fig, view = basic_map(to_proj, 'Natural Neighbor') mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
v1.0/_downloads/c1a3b4ec1d09d4debc078297d433a9b2/Point_Interpolation.ipynb
metpy/MetPy
bsd-3-clause
Cressman interpolation search_radius = 100 km grid resolution = 25 km min_neighbors = 1
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='cressman', minimum_neighbors=1, hres=75000, search_radius=100000) img = np.ma.masked_where(np.isnan(img), img) fig, view = basic_map(to_proj, 'Cressman') mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) fig.colorbar(mmb...
v1.0/_downloads/c1a3b4ec1d09d4debc078297d433a9b2/Point_Interpolation.ipynb
metpy/MetPy
bsd-3-clause
Barnes Interpolation search_radius = 100km min_neighbors = 3
gx, gy, img1 = interpolate_to_grid(x, y, temp, interp_type='barnes', hres=75000, search_radius=100000) img1 = np.ma.masked_where(np.isnan(img1), img1) fig, view = basic_map(to_proj, 'Barnes') mmb = view.pcolormesh(gx, gy, img1, cmap=cmap, norm=norm) fig.colorbar(mmb, shrink=.4, pad=0,...
v1.0/_downloads/c1a3b4ec1d09d4debc078297d433a9b2/Point_Interpolation.ipynb
metpy/MetPy
bsd-3-clause
Radial basis function interpolation linear
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='rbf', hres=75000, rbf_func='linear', rbf_smooth=0) img = np.ma.masked_where(np.isnan(img), img) fig, view = basic_map(to_proj, 'Radial Basis Function') mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) fig.colorbar(mmb, ...
v1.0/_downloads/c1a3b4ec1d09d4debc078297d433a9b2/Point_Interpolation.ipynb
metpy/MetPy
bsd-3-clause
Setup
!pip install --upgrade jax jaxlib # Install jax-dft !git clone https://github.com/google-research/google-research.git !pip install google-research/jax_dft import jax from jax.config import config import jax.numpy as jnp from jax_dft import scf from jax_dft import utils import matplotlib.pyplot as plt import numpy as ...
jax_dft/examples/recover_potential_from_density_and_energy.ipynb
google-research/google-research
apache-2.0
Run Define grids
grids = np.linspace(-5, 5, 201) dx = utils.get_dx(grids)
jax_dft/examples/recover_potential_from_density_and_energy.ipynb
google-research/google-research
apache-2.0
Quantum Harmonic Oscillator $v(x)=\frac{1}{2}k x^2$, where $k=1$. The ground state energy is $0.5$ Hartree.
qho_potential = 0.5 * grids ** 2 qho_density, qho_energy, _ = ( scf.solve_noninteracting_system( qho_potential, num_electrons=1, grids=grids)) print(f'total energy: {qho_energy}') show_density_potential(grids, qho_density, qho_potential, grey=True)
jax_dft/examples/recover_potential_from_density_and_energy.ipynb
google-research/google-research
apache-2.0
Perturbed Quantum Harmonic Oscillator Let's add a perturbation to the potential. We will see that the resulting density is, of course, no longer the original density.
perturbed_potential = qho_potential + np.exp(-(grids - 0.5) ** 2 / 0.04) perturbed_density, perturbed_energy, _ = ( scf.solve_noninteracting_system( perturbed_potential, num_electrons=1, grids=grids)) print(f'total energy: {perturbed_energy}') _, axs = plt.subplots(nrows=2) show_density_pot...
jax_dft/examples/recover_potential_from_density_and_energy.ipynb
google-research/google-research
apache-2.0
Adjust potential from loss $L=\int (n - n_\mathrm{QHO})^2 dx + (E - E_\mathrm{QHO})^2$
# Note the use of `jnp` not `np` here. def density_loss(output, target): return jnp.sum((output - target) ** 2) * dx def energy_loss(output, target): return (output - target) ** 2 print(f'Current density loss {density_loss(perturbed_density, qho_density)}') print(f'Current energy loss {energy_loss(perturbed_energ...
jax_dft/examples/recover_potential_from_density_and_energy.ipynb
google-research/google-research
apache-2.0
You can get the gradient $\frac{\partial L_n}{\partial v}$ via automatic differentiation from jax.grad
grad_fn = jax.jit(jax.grad(loss_fn)) # Compile with jit for fast grad. plt.plot(grids, grad_fn(perturbed_potential), '--', c=COLORS[2]) plt.xlabel(r'$x$') plt.ylabel(r'$\frac{\partial L_n}{\partial v}$') plt.show()
jax_dft/examples/recover_potential_from_density_and_energy.ipynb
google-research/google-research
apache-2.0
Now we have the gradient. Let's update the potential from the gradient of the loss with respect to the potential. $$v\leftarrow v - \epsilon\frac{\partial L}{\partial v}$$
potential = perturbed_potential loss_history = [] potential_history = [] record_interval = 1000 for i in range(5001): if i % record_interval == 0: loss_value = loss_fn(potential) print(f'step {i}, loss {loss_value}') loss_history.append(loss_value) potential_history.append(potential) potential -= 3...
jax_dft/examples/recover_potential_from_density_and_energy.ipynb
google-research/google-research
apache-2.0
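The update rule above can be sketched in plain NumPy with a toy quadratic loss whose gradient is written out by hand. This is only an illustration of the descent step $v\leftarrow v-\epsilon\,\partial L/\partial v$; the target, starting point, and step size are made up and stand in for the notebook's real `loss_fn` and `jax.grad`.

```python
import numpy as np

# Toy stand-in for the potential-update loop: minimize
# L(v) = sum((v - v_target)**2) by repeated gradient steps,
# v <- v - eps * dL/dv, with the gradient written analytically.
v_target = np.array([0.0, 1.0, 2.0])   # hypothetical "true" potential
v = np.array([5.0, -3.0, 10.0])        # hypothetical starting guess
eps = 0.1

for _ in range(200):
    grad = 2.0 * (v - v_target)  # dL/dv for the quadratic loss
    v = v - eps * grad

loss = np.sum((v - v_target) ** 2)
print(loss)  # shrinks toward 0 as v approaches v_target
```

Each step multiplies the residual by (1 - 2·eps), so with eps = 0.1 the iterate contracts geometrically toward the target.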
Visualize the learning curve
history_size = len(loss_history) plt.plot(np.arange(history_size) * record_interval, loss_history) plt.axhline(y=0, color='0.5', ls='--') plt.xlabel('step') plt.ylabel(r'$L$') plt.show()
jax_dft/examples/recover_potential_from_density_and_energy.ipynb
google-research/google-research
apache-2.0
and how the potential and corresponding density change.
_, axs = plt.subplots( nrows=2, ncols=history_size, figsize=(2.5 * history_size, 4), sharex=True, sharey='row') for i, ax in enumerate(axs[0]): ax.plot(grids, qho_density, c='0.5') density, _, _ = scf.solve_noninteracting_system( potential_history[i], num_electrons=1, grids=grids) ax.plot(grids, den...
jax_dft/examples/recover_potential_from_density_and_energy.ipynb
google-research/google-research
apache-2.0
Visualize the final result.
optimized_potential = potential_history[-1] optimized_density, optimized_total_eigen_energies, _ = ( scf.solve_noninteracting_system( optimized_potential, num_electrons=1, grids=grids)) print(f'total energy: {optimized_total_eigen_energies}') _, axs = plt.subplots(nrows=2) axs[0].plot(grid...
jax_dft/examples/recover_potential_from_density_and_energy.ipynb
google-research/google-research
apache-2.0
Model summary
print('## Model structure summary\n') print(model) params = model.get_params() n_params = {p.name : p.get_value().size for p in params} total_params = sum(n_params.values()) print('\n## Number of parameters\n') print(' ' + '\n '.join(['{0} : {1} ({2:.1f}%)'.format(k, v, 100.*v/total_params) ...
notebooks/Alexnet based 40 aug visualisation.ipynb
Neuroglycerin/neukrill-net-work
mit
Train and valid set NLL trace
tr = np.array(model.monitor.channels['valid_y_nll'].time_record) / 3600. fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(111) ax1.plot(model.monitor.channels['valid_y_nll'].val_record) ax1.plot(model.monitor.channels['train_y_nll'].val_record) ax1.set_xlabel('Epochs') ax1.legend(['Valid', 'Train']) ax1.set_ylabe...
notebooks/Alexnet based 40 aug visualisation.ipynb
Neuroglycerin/neukrill-net-work
mit
Visualising first layer weights
pv = get_weights_report(model=model) w_img = pv.get_img() w_img = w_img.resize((WEIGHT_IMAGE_SCALE*w_img.size[0], WEIGHT_IMAGE_SCALE*w_img.size[1])) w_img_data = io.BytesIO() w_img.save(w_img_data, format='png') display(Image(data=w_img_data.getvalue(), format='png'))
notebooks/Alexnet based 40 aug visualisation.ipynb
Neuroglycerin/neukrill-net-work
mit
Visualising activations for example test images Plot an example image to check it loaded correctly
plt.imshow(dataset.X[0])
notebooks/Alexnet based 40 aug visualisation.ipynb
Neuroglycerin/neukrill-net-work
mit
Compile theano function for forward propagating through network and getting all layer activations
X = model.get_input_space().make_theano_batch() Y = model.fprop( X, True ) model_activ_func = th.function([X], Y) test_idx = prng.choice(len(dataset.X), N_TEST_IMS, False) input_arrs = np.array([dataset.X[i].astype(np.float32).reshape(input_height, input_width, 1) for i in test_idx]) input_arrs = (input_arrs - normali...
notebooks/Alexnet based 40 aug visualisation.ipynb
Neuroglycerin/neukrill-net-work
mit
First let's load our data. In the VoxforgeDataPrep notebook, we created two arrays - inputs and outputs. The input has the dimensions (num_samples,num_features) and the output is simply a 1D vector of ints of length (num_samples). In this step, we split the training data into actual training (90%) and dev (10%) and merge ...
import sys sys.path.append('../python') from data import Corpus with Corpus('../data/mfcc_train_small.hdf5',load_normalized=True,merge_utts=True) as corp: train,dev=corp.split(0.9) test=Corpus('../data/mfcc_test.hdf5',load_normalized=True,merge_utts=True) tr_in,tr_out_dec=train.get() dev_in,dev_out_dec=dev...
notebooks/MLP_Keras.ipynb
danijel3/ASRDemos
apache-2.0
Next we define some constants for our program. Input and output dimensions can be inferred from the data, but the hidden layer size has to be defined manually. We also redefine our outputs as a 1-of-N matrix instead of an int vector. The old outputs were simply a list of integers (from 0 to 39) defining the phoneme (as...
input_dim=tr_in.shape[1] output_dim=np.max(tr_out_dec)+1 hidden_num=256 batch_size=256 epoch_num=100 def dec2onehot(dec): num=dec.shape[0] ret=np.zeros((num,output_dim)) ret[range(0,num),dec]=1 return ret tr_out=dec2onehot(tr_out_dec) dev_out=dec2onehot(dev_out_dec) tst_out=dec2onehot(tst_out_dec) ...
notebooks/MLP_Keras.ipynb
danijel3/ASRDemos
apache-2.0
Model definition Here we define our model using the Keras interface. There are two main model types in Keras: sequential and graph. Sequential is much more common and easy to use, so we start with that. Next we define the MLP topology. Here we have 3 layers: input, hidden and output. They are interconnected with two se...
model = Sequential() model.add(Dense(input_dim=input_dim,output_dim=hidden_num)) model.add(Activation('sigmoid')) model.add(Dense(output_dim=output_dim)) model.add(Activation('softmax')) #optimizer = SGD(lr=0.01, momentum=0.9, nesterov=True) optimizer= Adadelta() loss='categorical_crossentropy'
notebooks/MLP_Keras.ipynb
danijel3/ASRDemos
apache-2.0
After defining the model and all its parameters, we can compile it. This literally means compiling, because the model is converted into C++ code in the background and compiled with lots of optimizations to work as efficiently as possible. The process can take a while, but is worth the added speed in training.
model.compile(loss=loss, optimizer=optimizer) print(model.summary())
notebooks/MLP_Keras.ipynb
danijel3/ASRDemos
apache-2.0
We can also try and visualize the model using the builtin Dot painter:
from keras.utils import visualize_util from IPython.display import SVG SVG(visualize_util.to_graph(model,show_shape=True).create(prog='dot', format='svg'))
notebooks/MLP_Keras.ipynb
danijel3/ASRDemos
apache-2.0
Finally, we can start training the model. We provide the training function both training and validation data and define a few parameters: batch size and number of training epochs. Changing the batch size can affect both the training speed and final accuracy. This value is also closely related to the number of epochs. G...
val=(dev_in,dev_out) hist=model.fit(tr_in, tr_out, shuffle=True, batch_size=batch_size, nb_epoch=epoch_num, verbose=0, validation_data=val)
notebooks/MLP_Keras.ipynb
danijel3/ASRDemos
apache-2.0
The training method returns an object that contains the trained model parameters and the training history:
import matplotlib.pyplot as P %matplotlib inline P.plot(hist.history['loss'])
notebooks/MLP_Keras.ipynb
danijel3/ASRDemos
apache-2.0
You can get better graphs and more data if you overload the training callback method, which will provide you with the model parameters after each epoch during training. After the model is trained, we can easily test it using the evaluate method. The show_accuracy argument is required to compute the accuracy of the deci...
res=model.evaluate(tst_in,tst_out,batch_size=batch_size,show_accuracy=True,verbose=0) print('Loss: {}'.format(res[0])) print('Accuracy: {:%}'.format(res[1]))
notebooks/MLP_Keras.ipynb
danijel3/ASRDemos
apache-2.0
One other way to look at this is to check where the errors occur by looking at what's known as the confusion matrix. The confusion matrix counts the number of predicted outputs with respect to how they should have been predicted. All the values on the diagonal (so where the predicted class is equal to the reference) ar...
out = model.predict_classes(tst_in,batch_size=256,verbose=0) confusion=np.zeros((output_dim,output_dim)) for s in range(len(out)): confusion[out[s],tst_out_dec[s]]+=1 #normalize by class - because some classes occur much more often than others for c in range(output_dim): confusion[c,:]/=np.sum(confusion[c,:])...
notebooks/MLP_Keras.ipynb
danijel3/ASRDemos
apache-2.0
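The confusion-matrix bookkeeping and the per-class row normalization described above can be sketched self-contained. The predictions and references here are made-up toy labels, not the notebook's model output:

```python
import numpy as np

# Minimal sketch of a confusion matrix: rows index the predicted class,
# columns index the reference (true) class.
n_classes = 3
predicted = np.array([0, 1, 1, 2, 2, 2, 0, 1])
reference = np.array([0, 1, 2, 2, 2, 1, 0, 1])

confusion = np.zeros((n_classes, n_classes))
for p, r in zip(predicted, reference):
    confusion[p, r] += 1

# Normalize each row, because some classes occur much more often than others.
confusion /= confusion.sum(axis=1, keepdims=True)
print(confusion)  # every row now sums to 1
```

With this convention, off-diagonal mass in row `c` shows which reference classes get mistaken for class `c`.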
Some info about the attributes MSSubClass: Identifies the type of dwelling involved in the sale. 20 1-STORY 1946 & NEWER ALL STYLES 30 1-STORY 1945 & OLDER 40 1-STORY W/FINISHED ATTIC ALL AGES 45 1-1/2 STORY - UNFINISHED ALL AGES 50 1-1/2 STORY FINISHED ALL AGES 60 2-STORY 1946 ...
import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns
Module-04a-Regression-Basic.ipynb
amitkaps/applied-machine-learning
mit
Sale Price
sns.distplot(df_train["SalePrice"]);
Module-04a-Regression-Basic.ipynb
amitkaps/applied-machine-learning
mit
GrLivArea vs Sale Price
df_train.plot.scatter(x="GrLivArea", y="SalePrice")
Module-04a-Regression-Basic.ipynb
amitkaps/applied-machine-learning
mit
TotalBsmtSF vs Sale Price
df_train.plot.scatter(x="TotalBsmtSF", y="SalePrice")
Module-04a-Regression-Basic.ipynb
amitkaps/applied-machine-learning
mit
box plot overallqual/saleprice
var = 'OverallQual' data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1) f, ax = plt.subplots(figsize=(8, 6)) fig = sns.boxplot(x=var, y="SalePrice", data=data) fig.axis(ymin=0, ymax=800000);
Module-04a-Regression-Basic.ipynb
amitkaps/applied-machine-learning
mit
box plot year built / saleprice
var = 'YearBuilt' data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1) f, ax = plt.subplots(figsize=(16, 8)) fig = sns.boxplot(x=var, y="SalePrice", data=data) fig.axis(ymin=0, ymax=800000); plt.xticks(rotation=90);
Module-04a-Regression-Basic.ipynb
amitkaps/applied-machine-learning
mit
correlation matrix
corrmat = df_train.corr() f, ax = plt.subplots(figsize=(12, 9)) sns.heatmap(corrmat, vmax=.8, square=True);
Module-04a-Regression-Basic.ipynb
amitkaps/applied-machine-learning
mit
scatterplot with highly correlated features
sns.set() cols = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt'] sns.pairplot(df_train[cols], height = 2.5); plt.show();
Module-04a-Regression-Basic.ipynb
amitkaps/applied-machine-learning
mit
Missing Data
missing_features = df_train.isnull().sum() missing_features[missing_features>0] df_train.shape
Module-04a-Regression-Basic.ipynb
amitkaps/applied-machine-learning
mit
Recalling the mechanics of file I/O from the previous lecture, you'll see we opened up a file descriptor to alice.txt and read the whole file in a single go, storing all the text as a single string book. We then closed the file descriptor and printed out the first line (or first 71 characters), while wrapping the entir...
print(type(book)) lines = book.split("\n") # Split the string. Where should the splits happen? On newline characters, of course. print(type(lines))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
voilà! lines is now a list of strings.
print(len(lines))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
...a list of over 3,700 lines of text, no less o_O Newline characters Let's go over this point in a little more detail. A "newline" character is an actual character--like "a" or "b" or "1" or ":"--that represents pressing the "enter" key. However, like tabs and spaces, this character falls under the category of a "whit...
sentences = book.split(".") print(sentences[0])
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
You can already see some problems with this approach: not all sentences end with periods. Sure, you could split things again on question marks and exclamation points, but this still wouldn't tease out the case of the title--which has NO punctuation to speak of!--and doesn't account for important literary devices like s...
print("Even though there's no newline in the string I wrote, Python's print function still adds one.") print() # Blank line! print("There's a blank line above.")
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
This is fine for 99% of cases, except when the string already happens to have a newline at the end.
print("Here's a string with an explicit newline --> \n") print() print("Now there are TWO blank lines above!")
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
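One common remedy for the double-newline case, not shown in the lecture itself, is print's `end` keyword argument, which replaces the trailing newline print would otherwise add:

```python
# print() normally appends "\n"; passing end="" suppresses that, so a
# string that already ends in a newline doesn't produce a blank line.
print("Here's a string with an explicit newline --> \n", end="")
print("Only ONE line break above now, thanks to end=''.")
```

Setting `end=" "` or any other string works the same way, substituting that string for the default newline.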
"But wait!" you say again, "You read in the text file and split it on newlines a few slides ago, but when you printed out the first line, there was no extra blank line underneath! Why did that work today but not in previous lectures?" An excellent question. It has to do with the approach we took. Previously, we used th...
readlines = None try: with open("alice.txt", "r") as f: readlines = f.readlines() except: print("Something went wrong.") print(readlines[0]) print(readlines[2]) print("There are blank lines because of the trailing newline characters.")
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
On the other hand, when you call split() on a string, it not only treats each instance of the character you specify as a boundary between successive list elements, but it also removes those characters from the resulting list.
print(readlines[0]) # This used readlines(), so it STILL HAS trailing newlines. print(lines[0]) # This used split(), so the newlines were REMOVED. print("No trailing newline when using split()!")
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Is this getting confusing? If so, just remember the following: In general, make liberal use of the strip() function for strings you read in from files. This function strips (hence, the name) any whitespace off the front AND end of a string. So in the following example:
trailing_whitespace = " \t this is the important part \n \n \t " no_whitespace = trailing_whitespace.strip() print("Border --> |{}| <-- Border".format(no_whitespace))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
All the pesky spaces, tabs, and newlines have been stripped off the string. This is extremely useful and pretty much a must when you're preprocessing text. Capitalization This is one of those insidious issues that seem like such a tiny detail but can radically alter your analysis if left unnoticed: developing a strategy for ...
print(lines[410]) print(lines[411])
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
You'll notice the word "and" appears twice: once at the beginning of the sentence in line 410, and again in the middle of the sentence in line 411. It's the same word, but given their difference in capitalization, it's entirely likely that your analysis framework would treat those as two separate words. After all, "and...
print(lines[0]) title = lines[0].lower() print(title)
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Now everything is, in some sense, "equivalent." Part 2: The "Bag of Words" The "bag of words" model is one of the most popular ways of representing a large collection of text, and one of the easiest ways to structure text. The "bag of words" on display on the 8th floor of the Computer Science building at Carnegie Mello...
from collections import defaultdict word_counts = defaultdict(int) # All values are integers.
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
It otherwise behaves exactly like a regular Python dictionary, except we won't get a KeyError if we reference a key that doesn't exist; instead, a new key will be automatically created and a default value set. For the int type, this default value is 0. Next, we'll iterate through the lines of the book. There are a coup...
for line in lines: # Iterate through the lines of the book words = line.split() # If you don't give split() any arguments, the *default* split character is any whitespace. for word in words: w = word.lower() # Convert to lowercase. word_counts[w] += 1 # Add 1 to the count for that word in ou...
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
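The defaultdict behavior just described can be seen in a tiny, self-contained example (the keys here are made up for illustration):

```python
from collections import defaultdict

plain = {}
counts = defaultdict(int)  # missing keys spring into existence with value 0

counts["alice"] += 1       # works: "alice" is created as 0, then incremented
print(counts["alice"])     # 1
print(counts["rabbit"])    # 0 -- merely reading a missing key creates it

try:
    plain["alice"] += 1    # a regular dict raises KeyError instead
except KeyError:
    print("KeyError from the regular dict")
```

This is exactly why the counting loop below needs no "is this key already present?" check.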
Let's take a look at what we have! First, we'll count how many unique words there are.
print("Unique words: {}".format(len(word_counts.keys())))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Next, we'll count the total number of words in the book.
print("Total words: {}".format(sum(word_counts.values())))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Now we'll find the word that occurred most often:
maxcount = -1 maxitem = None for k, v in word_counts.items(): if v > maxcount: maxcount = v maxitem = k print("'{}' occurred most often ({} times).".format(maxitem, maxcount))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Well, there's a shocker. /sarcasm Python has another incredibly useful utility class for whenever we're counting things: a Counter! This will let us easily find the n words with the highest counts.
from collections import Counter counts = Counter(word_counts) print(counts.most_common(20)) # Find the 20 words with the highest counts!
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Pretty boring, right? Most of these words are referred to as stop words, or words that are used in pretty much every context and therefore don't tell you anything particularly interesting. They're usually filtered out, but because of some interesting corner cases, there's no universal "stop word list"; it's generally up to...
print("Here's the notation --> {}".format("another string"))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
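Filtering stop words out of a count dictionary, as described above, can be sketched with a small hand-picked stop list. There is no universal list, and both the stop list and the counts below are made up for illustration:

```python
# Tiny, hand-picked stop list; real projects tune this per task.
stop_words = {"the", "and", "to", "a", "of"}
word_counts = {"the": 1643, "alice": 398, "and": 872, "rabbit": 51, "queen": 76}

# Keep only the words that are NOT in the stop list.
filtered = {w: c for w, c in word_counts.items() if w not in stop_words}
print(filtered)  # {'alice': 398, 'rabbit': 51, 'queen': 76}
```

A set is used for the stop list because membership tests (`in`) on sets are constant-time.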
By using the curly braces {} inside the string, I've effectively created a placeholder for another string to be inserted. That other string is the argument(s) to the format() function. But there's a lot more to the curly braces than just {}. The simplest is just using the curly braces and nothing else. If you specify m...
print("{}, {}, and {}".format("a", "b", "c"))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Alternatively, you can specify the indices of the format() arguments inside the curly braces:
print("{0}, {2}, and {1}".format("a", "b", "c"))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Notice the 2nd and 3rd arguments were flipped in their final ordering! You can even provide arbitrary named arguments inside the curly braces, which format() will then expect.
print("{first_arg}, {second_arg}, and {third_arg}".format(second_arg = "b", first_arg = "a", third_arg = "c"))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Leading zeros and decimal precision You can also use this same syntax to specify leading zeros and decimal precision, but the notation gets a little more complicated. You'll need to first enter a colon ":", followed by the number 0, followed by the total field width; the number is then padded with leading zeros up to that width:
print("One leading zero: {:02}".format(1)) print("Two leading zeros: {:03}".format(1)) print("One leading zero: {:04}".format(100)) print("Two leading zeros: {:05}".format(100))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Decimal precision is very similar, but instead of a 0, you'll specify a decimal point "." followed by the level of precision you want (a number), followed by the letter "f" to signify that it's a floating-point:
import numpy as np print("Unformatted: {}".format(np.pi)) print("Two decimal places: {:.2f}".format(np.pi))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Finally, you can also include the comma in large numbers so you can actually read them more easily:
big_number = 98483745834 print("Big number: {}".format(big_number)) print("Big number with commas: {:,}".format(big_number))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Additional string functions There is an entire ecosystem of Python string functions that I highly encourage you to investigate, but I'll go over a few of the most common here. upper() and lower(): we've seen the latter already, but the former can be just as useful. count() will give you the number of times a substring ...
print("'Wonderland' occurs {} times.".format(book.count("Wonderland")))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
What if you need to find the actual location in a string of that substring? As in, where is "Wonderland" first mentioned in the book? find() to the rescue!
print("'Wonderland' is first found {} characters in.".format(book.find("Wonderland")))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
...well, that's embarrassing; that's probably the "Wonderland" that's in the book title. How about the second occurrence, then? We can use the index of the first one to tell find() that we want to start looking from there.
print("'Wonderland' is first found {} characters in.".format(book.find("Wonderland", 43 + 1)))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Now, I've decided I don't want this book to be Alice in Wonderland, but rather Alice in Las Vegas! How can I make this happen? replace()!
my_book = book.replace("Wonderland", "Las Vegas") print(my_book[:71])
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
I wonder if Alice will be Ocean's 14th? Two more very useful string functions are startswith() and endswith(). These are great if you're testing for leading or trailing characters or words.
print(lines[8]) print(lines[8].startswith("Title")) print(lines[8].endswith("Wonderland"))
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Finally, the join() method. This is a little tricky to use, but insanely useful. It's cropped up on a couple previous assignments. You'll want to use this method whenever you have a list of strings that you want to "glue" together into a single string. Perhaps you have a list of words and want to put them back together...
words = lines[8].split(" ") print(words)
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
We can do this by specifying first the character we want to put in between all the words we're joining--in this case, just a space character--then calling join() on that character, and passing in the list of words we want to glue together as the argument to the function.
between_char = " " sentence = between_char.join(words) print(sentence)
lectures/L19.ipynb
eds-uga/csci1360-fa16
mit
Robozzle exercise: we then solved Robozzle exercise #656; the goal was to get you to understand how the stack of recursive calls works. http://robozzle.com/js/play.aspx?puzzle=656
def fact(n): """ :input n: int :pre-cond: n > 0 :output f: int :post-cond: f = n * (n-1) * ... * 1 """ if n == 1: f = 1 else: f = fact(n-1)*n print("--- fact({}) = {}".format(n,f)) return f print(fact(6))
2015-12-02 - TD15 - Révisions sur la récursivité.ipynb
ameliecordier/iutdoua-info_algo2015
cc0-1.0
Compute iterative reweighted TF-MxNE with multiscale time-frequency dictionary The iterative reweighted TF-MxNE solver is a distributed inverse method based on the TF-MxNE solver, which promotes focal (sparse) sources :footcite:StrohmeierEtAl2015. The benefits of this approach are that: it is spatio-temporal without a...
# Author: Mathurin Massias <mathurin.massias@gmail.com> # Yousra Bekhti <yousra.bekhti@gmail.com> # Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de> # Alexandre Gramfort <alexandre.gramfort@inria.fr> # # License: BSD-3-Clause import os.path as op import mne from mne.datasets import somato f...
dev/_downloads/0a1bad60270bfbdeeea274fcca0015d2/multidict_reweighted_tfmxne.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Load somatosensory MEG data
data_path = somato.data_path() subject = '01' task = 'somato' raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg', 'sub-{}_task-{}_meg.fif'.format(subject, task)) fwd_fname = op.join(data_path, 'derivatives', 'sub-{}'.format(subject), 'sub-{}_task-{}-fwd.fif'.format(su...
dev/_downloads/0a1bad60270bfbdeeea274fcca0015d2/multidict_reweighted_tfmxne.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Run iterative reweighted multidict TF-MxNE solver
alpha, l1_ratio = 20, 0.05 loose, depth = 0.9, 1. # Use a multiscale time-frequency dictionary wsize, tstep = [4, 16], [2, 4] n_tfmxne_iter = 10 # Compute TF-MxNE inverse solution with dipole output dipoles, residual = tf_mixed_norm( evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio, n_tfmxne_iter=n_tfmxne...
dev/_downloads/0a1bad60270bfbdeeea274fcca0015d2/multidict_reweighted_tfmxne.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Generate stc from dipoles
stc = make_stc_from_dipoles(dipoles, forward['src']) plot_sparse_source_estimates( forward['src'], stc, bgcolor=(1, 1, 1), opacity=0.1, fig_name=f"irTF-MxNE (cond {evoked.comment})")
dev/_downloads/0a1bad60270bfbdeeea274fcca0015d2/multidict_reweighted_tfmxne.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Show the evoked response and the residual for gradiometers
ylim = dict(grad=[-300, 300]) evoked.copy().pick_types(meg='grad').plot( titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim) residual.copy().pick_types(meg='grad').plot( titles=dict(grad='Residuals: Gradiometers'), ylim=ylim)
dev/_downloads/0a1bad60270bfbdeeea274fcca0015d2/multidict_reweighted_tfmxne.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
1. Instantiate and reset LED Bar
from pynq.iop import Grove_LEDbar from pynq.iop import ARDUINO from pynq.iop import ARDUINO_GROVE_G4 # Instantiate Grove LED Bar on Arduino shield G4 ledbar = Grove_LEDbar(ARDUINO,ARDUINO_GROVE_G4) ledbar.reset()
Pynq-Z1/notebooks/examples/arduino_grove_ledbar.ipynb
VectorBlox/PYNQ
bsd-3-clause
2. Turn individual LEDs on or off Write a 10-bit binary pattern, with each bit representing the corresponding LED. 1 = on, 0 = off
from time import sleep # Light up different bars in a loop for i in range(2): ledbar.write_binary(0b1010100000) sleep(0.5) ledbar.write_binary(0b0000100100) sleep(0.5) ledbar.write_binary(0b1010101110) sleep(0.5) ledbar.write_binary(0b1111111110) sleep(0.5)
Pynq-Z1/notebooks/examples/arduino_grove_ledbar.ipynb
VectorBlox/PYNQ
bsd-3-clause
3. Set LEDs individually with different brightness levels The brightness of each LED can be set individually by writing a list of 10x 8-bit values to the LED bar. 0 is off, 0xff is full brightness.
# Brightness 0-255 HIGH = 0xFF MED = 0xAA LOW = 0x01 OFF = 0X00 brightness = [OFF, OFF, OFF, LOW, LOW, MED, MED, HIGH, HIGH, HIGH] ledbar.write_brightness(0b1111111111,brightness)
Pynq-Z1/notebooks/examples/arduino_grove_ledbar.ipynb
VectorBlox/PYNQ
bsd-3-clause
4. Set the "level" or the number of LEDs which are set A number or level of LEDs can be turned on, started from either end of the LED bar. For example, this feature could be used to indicate the level of something being measured. write_level(level, bright_level, green_to_red) level is the number of LEDs that are on. ...
for i in range (1,11): ledbar.write_level(i,3,0) sleep(0.3) for i in range (1,10): ledbar.write_level(i,3,1) sleep(0.3)
Pynq-Z1/notebooks/examples/arduino_grove_ledbar.ipynb
VectorBlox/PYNQ
bsd-3-clause
5. Controlling the LED Bar from the board buttons This cell demonstrates controlling the "level" of the LEDs from onboard buttons. Button 0 to increase level Button 1 to decrease level Button 3 to exit
from pynq.board import Button btns = [Button(index) for index in range(4)] i = 1 ledbar.reset() done = False while not done: if (btns[0].read()==1): sleep(0.2) ledbar.write_level(i,2,1) i = min(i+1,9) elif (btns[1].read()==1): sleep(0.2) i = max(i-1,0) ledbar.write_level(i,2,1) elif (btns[3].read()==1): ledbar.reset() done = True
Pynq-Z1/notebooks/examples/arduino_grove_ledbar.ipynb
VectorBlox/PYNQ
bsd-3-clause
Or, run the code in parallel mode (the command may need to be customized, depending on your MPI installation).
!mpirun -n 4 filament
applications/clawpack/advection/2d/filament/filament.ipynb
ForestClaw/forestclaw
bsd-2-clause
The Alien Blaster problem In preparation for an alien invasion, the Earth Defense League (EDL) has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, x. Based on previous tests, ...
# Solution # Here's the prior prior = Beta(5, 10) thinkplot.Pdf(prior.MakePmf()) thinkplot.decorate(xlabel='Probability of hit', ylabel='PMF') prior.Mean() # Solution # And here's the likelihood function from scipy.stats import binom class AlienBlaster(Suite): def Likelihood(self, data,...
examples/blaster_soln.ipynb
AllenDowney/ThinkBayes2
mit
Part Two Suppose we have a stockpile of 3 Alien Blaster 9000s and 7 Alien Blaster 10Ks. After extensive testing, we have concluded that the AB9000 hits the target 30% of the time, precisely, and the AB10K hits the target 40% of the time. If I grab a random weapon from the stockpile and shoot at 10 targets, wha...
k = 3 n = 10 x1 = 0.3 x2 = 0.4 0.3 * binom.pmf(k, n, x1) + 0.7 * binom.pmf(k, n, x2)
examples/blaster_soln.ipynb
AllenDowney/ThinkBayes2
mit
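The cell above leans on scipy.stats.binom; as a sanity check, the same mixture probability can be reproduced with nothing but the standard library. This is a minimal sketch, assuming the 30%/70% stockpile split and the hit rates of 0.3 and 0.4 stated above:

```python
from math import comb

def binom_pmf(k, n, p):
    # Binomial PMF: C(n, k) * p**k * (1 - p)**(n - k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# 30% chance the random weapon is an AB9000 (x = 0.3),
# 70% chance it is an AB10K (x = 0.4); probability of exactly 3 hits in 10.
prob = 0.3 * binom_pmf(3, 10, 0.3) + 0.7 * binom_pmf(3, 10, 0.4)
print(round(prob, 4))  # ≈ 0.2305
```

The weighted sum is the law of total probability: condition on which weapon was drawn, then average the two binomial probabilities.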
The answer is a value drawn from the mixture of the two distributions. Continuing the previous problem, let's estimate the distribution of k, the number of successful shots out of 10. Write a few lines of Python code to simulate choosing a random weapon and firing it. Write a loop that simulates the scenario and ...
def flip(p): return np.random.random() < p def simulate_shots(n, p): return np.random.binomial(n, p) ks = [] for i in range(1000): if flip(0.3): k = simulate_shots(n, x1) else: k = simulate_shots(n, x2) ks.append(k)
examples/blaster_soln.ipynb
AllenDowney/ThinkBayes2
mit
Here's what the distribution looks like.
pmf = Pmf(ks) thinkplot.Hist(pmf) thinkplot.decorate(xlabel='Number of hits', ylabel='PMF') len(ks), np.mean(ks)
examples/blaster_soln.ipynb
AllenDowney/ThinkBayes2
mit
The mean should be near 3.7. We can run this simulation more efficiently using NumPy. First we generate a sample of xs:
xs = np.random.choice(a=[x1, x2], p=[0.3, 0.7], size=1000) Hist(xs)
examples/blaster_soln.ipynb
AllenDowney/ThinkBayes2
mit
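For readers without NumPy at hand, the same two-outcome sampling can be sketched with the standard library's random.choices; the values and weights below mirror the np.random.choice call above (the seed is only there to make the run repeatable):

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the run is repeatable
# Draw 1000 hit probabilities: 0.3 with weight 30%, 0.4 with weight 70%.
xs = random.choices([0.3, 0.4], weights=[0.3, 0.7], k=1000)
print(Counter(xs))  # roughly 300 draws of 0.3 and 700 of 0.4
```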
Then for each x we generate a k:
ks = np.random.binomial(n, xs);
examples/blaster_soln.ipynb
AllenDowney/ThinkBayes2
mit
And the results look similar.
pmf = Pmf(ks) thinkplot.Hist(pmf) thinkplot.decorate(xlabel='Number of hits', ylabel='PMF') np.mean(ks)
examples/blaster_soln.ipynb
AllenDowney/ThinkBayes2
mit
One more way to do the same thing is to make a meta-Pmf, which contains the two binomial Pmf objects:
from thinkbayes2 import MakeBinomialPmf pmf1 = MakeBinomialPmf(n, x1) pmf2 = MakeBinomialPmf(n, x2) metapmf = Pmf({pmf1:0.3, pmf2:0.7}) metapmf.Print()
examples/blaster_soln.ipynb
AllenDowney/ThinkBayes2
mit
Here's how we can draw samples from the meta-Pmf:
ks = [metapmf.Random().Random() for _ in range(1000)];
examples/blaster_soln.ipynb
AllenDowney/ThinkBayes2
mit
And here are the results, one more time:
pmf = Pmf(ks) thinkplot.Hist(pmf) thinkplot.decorate(xlabel='Number of hits', ylabel='PMF') np.mean(ks)
examples/blaster_soln.ipynb
AllenDowney/ThinkBayes2
mit
This result, which we have estimated three ways, is a predictive distribution, based on our uncertainty about x. We can compute the mixture analytically using thinkbayes2.MakeMixture: def MakeMixture(metapmf, label='mix'): """Make a mixture distribution. Args: metapmf: Pmf that maps from Pmfs to probs. ...
from thinkbayes2 import MakeMixture mix = MakeMixture(metapmf) thinkplot.Hist(mix) mix.Mean() mix[3]
examples/blaster_soln.ipynb
AllenDowney/ThinkBayes2
mit
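MakeMixture is just a weighted sum of the two binomial PMFs. A minimal stdlib sketch of that same computation (assuming the n=10, x1=0.3, x2=0.4 setup and the 0.3/0.7 weights used above) recovers the mean of 3.7 analytically:

```python
from math import comb

def binom_pmf(k, n, p):
    # Binomial PMF: C(n, k) * p**k * (1 - p)**(n - k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Mixture PMF: weighted sum of the two binomial PMFs at each k.
mix = {k: 0.3 * binom_pmf(k, 10, 0.3) + 0.7 * binom_pmf(k, 10, 0.4)
       for k in range(11)}
mean = sum(k * p for k, p in mix.items())
print(round(mean, 2))  # 3.7 = 10 * (0.3*0.3 + 0.7*0.4)
```

The mean of a mixture is the weighted mean of the component means, which is why the simulations above cluster around 3.7.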
Get the sequence structure
# Call RNAfold to get the sequence structure def _get_sequence_structure(seqs): if mode == 'RNAfold': return _rnafold_wrapper(seqs) else: raise Exception('Not known: %s' % mode) def _rnafold_wrapper(sequence): head = sequence[0] seq = sequence[1].split()[0] flags='--noPS' ...
Functions_Fasta_Input_to_Structure_and_Graph_modifing...-submit.ipynb
fabriziocosta/GraphFinder
gpl-2.0
Build the Graph
#Recognize basepairs and add them to the generated graph def _make_graph(head, seq, struc): print ("Graph title", head) open_pran = "(" close_pran = ")" stack_o = [] stack_c = [] G = nx.Graph() seq_struc_zip = zip(seq, struc) #print seq_struc_zip for i, k in enumerate(struc): ...
Functions_Fasta_Input_to_Structure_and_Graph_modifing...-submit.ipynb
fabriziocosta/GraphFinder
gpl-2.0
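The core of _make_graph is matching each opening parenthesis in the dot-bracket structure string to its closing partner. A self-contained sketch of that stack-based matching, independent of networkx and using a hypothetical helper name, looks like this:

```python
def base_pairs(struc):
    # Push the index of each '(' and pop it at the matching ')',
    # yielding one (open, close) index pair per base pair.
    stack, pairs = [], []
    for i, c in enumerate(struc):
        if c == '(':
            stack.append(i)
        elif c == ')':
            pairs.append((stack.pop(), i))
    return pairs

print(base_pairs('((..))'))  # [(1, 4), (0, 5)]
```

Each returned pair would become one basepair edge in the graph, alongside backbone edges between consecutive residues.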
Experiment
# Generate the graph # Note: seq, seqs are not correct; they do not take the zipped output zip_head_seqs = _sequeceWrapper(file_path) print ('zip_head_seqs here', zip_head_seqs) for i, seq in enumerate(zip_head_seqs): heads = seq[0] seq1 = seq[1] mode = 'RNAfold' head, seq, struc = _fold(seq) G = _make_graph...
Functions_Fasta_Input_to_Structure_and_Graph_modifing...-submit.ipynb
fabriziocosta/GraphFinder
gpl-2.0
In this chapter and the following ones, we cover programming in Python. These notes present the broad outline and main elements of the subject. Readers wishing to learn more are invited to consult chapters 1 to 7 of the French-language book by G. Swinnen, Apprendre à programmer avec Python 3 [Swinn...
for a in range(9): print(a)
NotesDeCours/13-boucle-for.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
The for loop The for loop also makes it possible to iterate over the elements of a list, a character string, or, in general, any iterable object:
for a in [1,2,3,4]: print(a) for a in 'bonjour': print(a)
NotesDeCours/13-boucle-for.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
In Python, a for loop is identified by a header line beginning with for, ending with a colon :, and following the syntax for TRUC in MACHIN:. The convention is to always use 4 spaces to indent the lines of the instruction block that belongs to the loop:
for i in liste: # header line <line 1 of the instruction block> <line 2 of the instruction block> ... <line n of the instruction block> <line executed after the loop>
NotesDeCours/13-boucle-for.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
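The schema above can be made concrete with a tiny runnable instance (liste is the schema's placeholder for any list; here we use [1, 2, 3]):

```python
total = 0
for i in [1, 2, 3]:   # header line, ends with a colon
    total += i        # line 1 of the instruction block (indented 4 spaces)
    print(total)      # line 2 of the instruction block
print('after the loop:', total)  # executed once, after the loop
```

Running it prints the partial sums 1, 3, 6, and then the final line once the loop is done.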
The instruction block is executed as many times as there are elements in the list, that is, once for each value taken by the variable i. An example of a for loop with Sympy Suppose we want to factor the polynomial $x^k-1$ for every value of $k=1,...,9$. In SymP...
from sympy import factor from sympy.abc import x factor(x**1-1) factor(x**2-1) factor(x**3-1) factor(x**4-1) factor(x**5-1) factor(x**6-1) factor(x**7-1) factor(x**8-1) factor(x**9-1)
NotesDeCours/13-boucle-for.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
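The nine repeated factor calls above are exactly the kind of repetition a for loop removes. A minimal sketch, assuming SymPy is installed as in the cell above:

```python
from sympy import factor
from sympy.abc import x

# One call per k = 1, ..., 9 instead of nine copy-pasted lines.
for k in range(1, 10):
    print(k, factor(x**k - 1))
```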