Exercise 1: Matrix size. Write a function dimensoes(matriz) that takes a matrix as a parameter and prints the dimensions of the matrix it receives, in the format iXj. Examples: minha_matriz = [[1], [2], [3]] dimensoes(minha_matriz) 3X1 minha_matriz = [[1, 2, 3], [4, 5, 6]] dimensoes(minha_matriz) 2X3
def dimensoes(A):
    '''Takes a matrix as a parameter and prints its dimensions in the
    format iXj, where i is the number of rows and j the number of columns.

    Example:
    >>> minha_matriz = [[1], [2], [3]]
    >>> dimensoes(minha_matriz)
    3X1
    '''
    linhas = len(A)
    colunas = len(A[0])
    print('{0}X{1}'.format(linhas, colunas))
.ipynb_checkpoints/Curso Introdução à Ciência da Computação com Python - Parte 2-checkpoint.ipynb
marcelomiky/PythonCodes
mit
Exercise 2: Matrix addition. Write the function soma_matrizes(m1, m2) that takes two matrices and returns a matrix representing their sum if the matrices have equal dimensions. Otherwise, the function must return False. Examples: m1 = [[1, 2, 3], [4, 5, 6]] m2 = [[2, 3, 4], [5, 6, 7]] soma_matrizes(m1, m2) => [[3, 5, 7], [9, 11, 13]]
def soma_matrizes(m1, m2):
    def dimensoes(A):
        lin = len(A)
        col = len(A[0])
        return (lin, col)

    if dimensoes(m1) != dimensoes(m2):
        return False
    matriz = []
    for i in range(len(m1)):
        linha = []
        for j in range(len(m1[0])):
            linha.append(m1[i][j] + m2[i][j])
        matriz.append(linha)
    return matriz
.ipynb_checkpoints/Curso Introdução à Ciência da Computação com Python - Parte 2-checkpoint.ipynb
marcelomiky/PythonCodes
mit
Programming practice: additional (optional) exercises. Exercise 1: Printing matrices. As proposed in the first video lecture of the week, write a function imprime_matriz(matriz) that takes a matrix as a parameter and prints the matrix, line by line. Note that you must NOT print spaces after the last element of each line.
def imprime_matriz(A):
    # Print one matrix row per line, with no space after the last element.
    for linha in A:
        print(' '.join(str(x) for x in linha))

minha_matriz = [[1], [2], [3]]
imprime_matriz(minha_matriz)

minha_matriz = [[1, 2, 3], [4, 5, 6]]
imprime_matriz(minha_matriz)
.ipynb_checkpoints/Curso Introdução à Ciência da Computação com Python - Parte 2-checkpoint.ipynb
marcelomiky/PythonCodes
mit
Exercise 2: Multipliable matrices. Two matrices can be multiplied if the number of columns of the first equals the number of rows of the second. Write the function sao_multiplicaveis(m1, m2) that takes two matrices as parameters and returns True if the matrices can be multiplied (in the given order) and False otherwise. ...
def sao_multiplicaveis(m1, m2):
    '''Takes two matrices as parameters and returns True if they can be
    multiplied (the number of columns of the first equals the number of
    rows of the second), and False otherwise.'''
    return len(m1[0]) == len(m2)
.ipynb_checkpoints/Curso Introdução à Ciência da Computação com Python - Parte 2-checkpoint.ipynb
marcelomiky/PythonCodes
mit
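As a follow-up to the three exercises above, a multiplication routine shows the compatibility check in action. This is a sketch, not part of the original assignment; `multiplica` is a hypothetical helper name, returning False for incompatible shapes by analogy with soma_matrizes.

```python
def sao_multiplicaveis(m1, m2):
    # columns of the first must equal rows of the second
    return len(m1[0]) == len(m2)

def multiplica(m1, m2):
    """Hypothetical helper: product of m1 and m2 when they are
    compatible, False otherwise."""
    if not sao_multiplicaveis(m1, m2):
        return False
    return [[sum(m1[i][k] * m2[k][j] for k in range(len(m2)))
             for j in range(len(m2[0]))]
            for i in range(len(m1))]

print(multiplica([[1, 2]], [[3], [4]]))  # [[11]]
```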
The two core components of the conda ecosystem are the package cache and the environment subfolders. These are abstracted with PackageInfo and Environment objects respectively. Here we create "pools" of PackageInfo and Environment objects. These objects permit easy, read-only access to various bits of metadata stored...
# Create pkg_cache and environments pkg_cache = cache.packages(root_pkgs) envs = environment.environments(root_envs) print(pkg_cache[:5]) print() print(envs[:5])
Conda-tools.ipynb
groutr/conda-tools
bsd-3-clause
Packages. Conda packages all have an info/ subdirectory for storing metadata about the package. PackageInfo provides convenient access to this metadata.
pi = pkg_cache[0] pi.index # info/index.json # We can access fields of index.json directly from the object. pi.name, pi.version, pi.build # Access to info/files pi.files # The full spec of the package. This is always "name-version-build" pi.full_spec # We can run queries against the information we have on packages # ...
Conda-tools.ipynb
groutr/conda-tools
bsd-3-clause
Environments
e = envs[2] e # We can discover the currently activated environment {e.path: e.activated() for e in envs} # We can see all the packages that claim to be linked into the environment, keyed by name e.linked_packages # linked packages are either hard-linked, symlinked, or copied into environments. set(chain(e.hard_link...
Conda-tools.ipynb
groutr/conda-tools
bsd-3-clause
Neat stuff Convenient access to the package cache and environment metadata allows you to do some neat stuff relatively easily. Below are a few examples of some quick ideas that can be implemented with little effort.
# Calculate potential collisions in environments by packages claiming the same file paths # Very quick and naive way of detecting file path collisions. for i, p1 in enumerate(pkg_cache): for p2 in pkg_cache[i+1:]: if p1.name == p2.name: continue x = p1.files.intersection(p2.files) ...
Conda-tools.ipynb
groutr/conda-tools
bsd-3-clause
Loss layers: Softmax and SVM You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py. You can make sure that the implementations are correct by running the followin...
num_classes, num_inputs = 10, 50 x = 0.001 * np.random.randn(num_inputs, num_classes) y = np.random.randint(num_classes, size=num_inputs) # svm_loss returns (loss, dx); the [0] keeps only the loss dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False) loss, dx = svm_loss(x, y) # Test svm_loss...
assignment2/FullyConnectedNets.ipynb
hanezu/cs231n-assignment
mit
Solver In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class. Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After do...
model = TwoLayerNet() solver = None ############################################################################## # TODO: Use a Solver instance to train a TwoLayerNet that achieves at least # # 50% accuracy on the validation set. # ##############################################...
assignment2/FullyConnectedNets.ipynb
hanezu/cs231n-assignment
mit
As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
# TODO: Use a three-layer Net to overfit 50 training examples. num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } weight_scale = 1e-2 learning_rate = 1e-2 model = FullyConnectedNet([100, 100], ...
assignment2/FullyConnectedNets.ipynb
hanezu/cs231n-assignment
mit
Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
# TODO: Use a five-layer Net to overfit 50 training examples. num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } learning_rate = 3e-4 weight_scale = 1e-1 model = FullyConnectedNet([100, 100, 100, 100],...
assignment2/FullyConnectedNets.ipynb
hanezu/cs231n-assignment
mit
Inline question: Did you notice anything about the comparative difficulty of training the three-layer net vs training the five-layer net? Answer: The three-layer net is much easier to train; I could not get the five-layer one to train at all. The loss does not go down no matter what learning rate and weight_scale I choose. I looke...
from cs231n.optim import sgd_momentum N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) config = {'learning_rate': 1e-3, 'velocity': v} next_w, _ = sgd_momentum(w, dw, config=config) expected_next_w = np.a...
assignment2/FullyConnectedNets.ipynb
hanezu/cs231n-assignment
mit
Train a good model! Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net. If you are careful it should be possible to get accuracies above 55%, but we don't require...
best_model = None ################################################################################ # TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might # # find batch normalization and dropout useful. Store your best model in the # # best_model variable. ...
assignment2/FullyConnectedNets.ipynb
hanezu/cs231n-assignment
mit
Test your model. Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
y_test_pred = np.argmax(best_model.loss(X_test), axis=1) y_val_pred = np.argmax(best_model.loss(X_val), axis=1) print('Validation set accuracy: ', (y_val_pred == y_val).mean()) print('Test set accuracy: ', (y_test_pred == y_test).mean())
assignment2/FullyConnectedNets.ipynb
hanezu/cs231n-assignment
mit
Blobs 1
# Create blobs # The coordinates of the centers of our blobs. centers = [[0, 0], [-10, -10], [10, 10], [5,-5], [-5,5]] # Make 10,000 rows worth of data with two features representing five # clusters, each having a standard deviation of 1. X, y = make_blobs( n_samples=10000, centers=centers, cluster_std=1,...
DRILL+Mo%27+blobs%2C+mo%27+problems.ipynb
borja876/Thinkful-DataScience-Borja
mit
K-Means
# Normalize the data. X_norm = normalize(X_train) # Reduce it to two components. X_pca = PCA(2).fit_transform(X_train) # Calculate predicted values. y_pred = KMeans(n_clusters=4, random_state=42).fit_predict(X_pca) # Plot the solution. plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y_pred) plt.show() # Check the solution ...
DRILL+Mo%27+blobs%2C+mo%27+problems.ipynb
borja876/Thinkful-DataScience-Borja
mit
Mean Shift Clustering
# Here we set the bandwidth. This function automatically derives a bandwidth # number based on an inspection of the distances among points in the data. bandwidth = estimate_bandwidth(X_train, quantile=0.2, n_samples=500) # Declare and fit the model. ms = MeanShift(bandwidth=bandwidth, bin_seeding=True) ms.fit(X_train)...
DRILL+Mo%27+blobs%2C+mo%27+problems.ipynb
borja876/Thinkful-DataScience-Borja
mit
Spectral Clustering
# Here we look for four clusters. n_clusters=4 # Declare and fit the model. sc = SpectralClustering(n_clusters=n_clusters) sc.fit(X_train) #Predicted clusters. predict=sc.fit_predict(X_train) #Graph results. plt.scatter(X_train[:, 0], X_train[:, 1], c=predict) plt.show() print('Comparing the assigned cate...
DRILL+Mo%27+blobs%2C+mo%27+problems.ipynb
borja876/Thinkful-DataScience-Borja
mit
Affinity Propagation
# Declare the model and fit it in one statement. # Note that you can provide arguments to the model, but we didn't. af = AffinityPropagation().fit(X_train) print('Done') # Pull the number of clusters and cluster assignments for each data point. cluster_centers_indices = af.cluster_centers_indices_ n_clusters_ = len(cl...
DRILL+Mo%27+blobs%2C+mo%27+problems.ipynb
borja876/Thinkful-DataScience-Borja
mit
Blobs 2
# Create blobs # The coordinates of the centers of our blobs. centers = [[0, 0], [-10, 0], [10, 0]] # Make 10,000 rows worth of data with two features representing three # clusters, each having a standard deviation of 1.5. X, y = make_blobs( n_samples=10000, centers=centers, cluster_std=1.5, n_features=2...
DRILL+Mo%27+blobs%2C+mo%27+problems.ipynb
borja876/Thinkful-DataScience-Borja
mit
K-Means
# Normalize the data. #X_norm = normalize(X_train) # Reduce it to two components. X_pca = PCA(2).fit_transform(X_train) # Calculate predicted values. y_pred = KMeans(n_clusters=3, random_state=42).fit_predict(X_pca) # Plot the solution. plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y_pred) plt.show() # Check the solution...
DRILL+Mo%27+blobs%2C+mo%27+problems.ipynb
borja876/Thinkful-DataScience-Borja
mit
Spectral Clustering
# We know we're looking for three clusters. n_clusters=3 # Declare and fit the model. sc = SpectralClustering(n_clusters=n_clusters) sc.fit(X_train) #Predicted clusters. predict=sc.fit_predict(X_train) #Graph results. plt.scatter(X_train[:, 0], X_train[:, 1], c=predict) plt.show() print('Comparing the assigned cate...
DRILL+Mo%27+blobs%2C+mo%27+problems.ipynb
borja876/Thinkful-DataScience-Borja
mit
Blobs 3
# Create blobs # The coordinates of the centers of our blobs. centers = [[0, 0], [-3, 3], [5, 5]] # Make 10,000 rows worth of data with two features representing three # clusters, each having a standard deviation of 1. X, y = make_blobs( n_samples=10000, centers=centers, cluster_std=1, n_features=2, ...
DRILL+Mo%27+blobs%2C+mo%27+problems.ipynb
borja876/Thinkful-DataScience-Borja
mit
Useful keyboard shortcuts. Enter edit mode: Enter. Enter command mode: Escape. In command mode: show keyboard shortcuts: h; find and replace: f; insert a cell above the selection: a; insert a cell below the selection: b; switch to Markdown: m; delete the selected cells: dd (press 'd' twice quickly); undo cell delet...
x = np.arange(-2 * np.pi, 2 * np.pi, 0.1) y = np.sin(x) plt.plot(x, y)
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
3D plots
from mpl_toolkits.mplot3d import axes3d # Build data ############### x = np.arange(-5, 5, 0.25) y = np.arange(-5, 5, 0.25) xx, yy = np.meshgrid(x, y) z = np.sin(np.sqrt(xx**2 + yy**2)) # Plot data ################# fig = plt.figure() ax = axes3d.Axes3D(fig) ax.plot_wireframe(xx, yy, z) plt.show()
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Animations
from matplotlib.animation import FuncAnimation # Plots fig, ax = plt.subplots() def update(frame): x = np.arange(frame/10., frame/10. + 2. * math.pi, 0.1) ax.clear() ax.plot(x, np.cos(x)) # Optional: save plots filename = "img_{:03}.png".format(frame) plt.savefig(filename) # Note: "interval"...
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Interactive plots with Plotly TODO: https://plot.ly/ipython-notebooks/ Interactive plots with Bokeh TODO: http://bokeh.pydata.org/en/latest/docs/user_guide/notebook.html Embedded HTML and Javascript
%%html <div id="toc"></div> %%javascript var toc = document.getElementById("toc"); toc.innerHTML = "<b>Table of contents:</b>"; toc.innerHTML += "<ol>" var h_list = $("h2, h3"); //$("h2"); // document.getElementsByTagName("h2"); for(var i = 0 ; i < h_list.length ; i++) { var h = h_list[i]; var h_str = h...
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
IPython built-in magic commands See http://ipython.readthedocs.io/en/stable/interactive/magics.html Execute an external python script
%run ./notebook_snippets_run_test.py %run ./notebook_snippets_run_mpl_test.py
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Load an external python script Load the full script
# %load ./notebook_snippets_run_mpl_test.py #!/usr/bin/env python3 # Copyright (c) 2012 Jérémie DECOCK (http://www.jdhp.org) # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restrict...
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Load a specific symbol (function, class, ...)
# %load -s main ./notebook_snippets_run_mpl_test.py def main(): x = np.arange(-10, 10, 0.1) y = np.cos(x) plt.plot(x, y) plt.grid(True) plt.show()
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Load specific lines
# %load -r 22-41 ./notebook_snippets_run_mpl_test.py """ This module has been written to illustrate the ``%run`` magic command in ``notebook_snippets.ipynb``. """ import numpy as np import matplotlib.pyplot as plt def main(): x = np.arange(-10, 10, 0.1) y = np.cos(x) plt.plot(x, y) plt.grid(True) ...
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Time measurement %time
%%time plt.hist(np.random.normal(loc=0.0, scale=1.0, size=100000), bins=50)
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
%timeit
%%timeit plt.hist(np.random.normal(loc=0.0, scale=1.0, size=100000), bins=50)
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
ipywidget. On JupyterLab, you should install the widgets extension first (see https://ipywidgets.readthedocs.io/en/latest/user_install.html#installing-the-jupyterlab-extension): jupyter labextension install @jupyter-widgets/jupyterlab-manager
#help(ipywidgets) #dir(ipywidgets) from ipywidgets import IntSlider from IPython.display import display slider = IntSlider(min=1, max=10) display(slider)
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
ipywidgets.interact Documentation See http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
#help(ipywidgets.interact)
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Using interact as a decorator with named parameters. To me, this is the best option for functions used only once... Text
@interact(text="IPython Widgets") def greeting(text): print("Hello {}".format(text))
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Integer (IntSlider)
@interact(num=5) def square(num): print("{} squared is {}".format(num, num*num)) @interact(num=(0, 100)) def square(num): print("{} squared is {}".format(num, num*num)) @interact(num=(0, 100, 10)) def square(num): print("{} squared is {}".format(num, num*num))
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Float (FloatSlider)
@interact(num=5.) def square(num): print("{} squared is {}".format(num, num*num)) @interact(num=(0., 10.)) def square(num): print("{} squared is {}".format(num, num*num)) @interact(num=(0., 10., 0.5)) def square(num): print("{} squared is {}".format(num, num*num))
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Boolean (Checkbox)
@interact(upper=False) def greeting(upper): text = "hello" if upper: print(text.upper()) else: print(text.lower())
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
List (Dropdown)
@interact(name=["John", "Bob", "Alice"]) def greeting(name): print("Hello {}".format(name))
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Dictionary (Dropdown)
@interact(word={"One": "Un", "Two": "Deux", "Three": "Trois"}) def translate(word): print(word) x = np.arange(-2 * np.pi, 2 * np.pi, 0.1) @interact(function={"Sin": np.sin, "Cos": np.cos}) def plot(function): y = function(x) plt.plot(x, y)
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Using interact as a decorator Text
@interact def greeting(text="World"): print("Hello {}".format(text))
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Integer (IntSlider)
@interact def square(num=2): print("{} squared is {}".format(num, num*num)) @interact def square(num=(0, 100)): print("{} squared is {}".format(num, num*num)) @interact def square(num=(0, 100, 10)): print("{} squared is {}".format(num, num*num))
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Float (FloatSlider)
@interact def square(num=5.): print("{} squared is {}".format(num, num*num)) @interact def square(num=(0., 10.)): print("{} squared is {}".format(num, num*num)) @interact def square(num=(0., 10., 0.5)): print("{} squared is {}".format(num, num*num))
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Boolean (Checkbox)
@interact def greeting(upper=False): text = "hello" if upper: print(text.upper()) else: print(text.lower())
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
List (Dropdown)
@interact def greeting(name=["John", "Bob", "Alice"]): print("Hello {}".format(name))
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Dictionary (Dropdown)
@interact def translate(word={"One": "Un", "Two": "Deux", "Three": "Trois"}): print(word) x = np.arange(-2 * np.pi, 2 * np.pi, 0.1) @interact def plot(function={"Sin": np.sin, "Cos": np.cos}): y = function(x) plt.plot(x, y)
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Using interact as a function. To me, this is the best option for functions used several times... Text
def greeting(text): print("Hello {}".format(text)) interact(greeting, text="IPython Widgets")
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Integer (IntSlider)
def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=5) def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=(0, 100)) def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=(0, 100, 10))
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Float (FloatSlider)
def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=5.) def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=(0., 10.)) def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=(0., 10., 0.5))
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Boolean (Checkbox)
def greeting(upper): text = "hello" if upper: print(text.upper()) else: print(text.lower()) interact(greeting, upper=False)
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
List (Dropdown)
def greeting(name): print("Hello {}".format(name)) interact(greeting, name=["John", "Bob", "Alice"])
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Dictionary (Dropdown)
def translate(word): print(word) interact(translate, word={"One": "Un", "Two": "Deux", "Three": "Trois"}) x = np.arange(-2 * np.pi, 2 * np.pi, 0.1) def plot(function): y = function(x) plt.plot(x, y) interact(plot, function={"Sin": np.sin, "Cos": np.cos})
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Example of using multiple widgets on one function
@interact(upper=False, name=["john", "bob", "alice"]) def greeting(upper, name): text = "hello {}".format(name) if upper: print(text.upper()) else: print(text.lower())
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Display images (PNG, JPEG, GIF, ...) Within a code cell (using IPython.display)
from IPython.display import Image Image("fourier.gif")
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Within a Markdown cell. Sound player widget. See: https://ipython.org/ipython-doc/dev/api/generated/IPython.display.html#IPython.display.Audio
from IPython.display import Audio
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Generate a sound
framerate = 44100 t = np.linspace(0, 5, framerate*5) data = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*224*t) Audio(data, rate=framerate)
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Generate a multi-channel (stereo or more) sound
data_left = np.sin(2 * np.pi * 220 * t) data_right = np.sin(2 * np.pi * 224 * t) Audio([data_left, data_right], rate=framerate)
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
From URL
Audio("http://www.nch.com.au/acm/8k16bitpcm.wav") Audio(url="http://www.w3schools.com/html/horse.ogg")
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
From file
#Audio('/path/to/sound.wav') #Audio(filename='/path/to/sound.ogg')
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
From bytes
#Audio(b'RAW_WAV_DATA...') #Audio(data=b'RAW_WAV_DATA...')
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Youtube widget Class for embedding a YouTube Video in an IPython session, based on its video id. e.g. to embed the video from https://www.youtube.com/watch?v=0HlRtU8clt4 , you would do: See https://ipython.org/ipython-doc/dev/api/generated/IPython.display.html#IPython.display.YouTubeVideo
from IPython.display import YouTubeVideo vid = YouTubeVideo("0HlRtU8clt4") display(vid)
nb_dev_jupyter/notebook_snippets_en.ipynb
jdhp-docs/python_notebooks
mit
Variables and assignments
a = 8 b = 2 * a b a + b a = a + 1 a a, b = 2, 5 a + 2 * b
Programacion_imperativa_en_Python.ipynb
jaalonso/AFV
gpl-3.0
Defining functions. Exercise 1. Define the function suma such that suma(x, y) is the sum of x and y. For example, ~~~ suma(2, 3) 5 ~~~
def suma(x, y): return x+y suma(2, 3)
Programacion_imperativa_en_Python.ipynb
jaalonso/AFV
gpl-3.0
Writing and reading. Exercise 2. Define the procedure suma, which reads two numbers and prints their sum. For example, ~~~ suma() Escribe el primer número: 2 Escribe el segundo número: 3 La suma es: 5 ~~~
def suma(): a = eval(input("Escribe el primer número: ")) b = eval(input("Escribe el segundo número: ")) print("La suma es:",a+b) suma()
Programacion_imperativa_en_Python.ipynb
jaalonso/AFV
gpl-3.0
The conditional structure. Simple conditionals. Exercise 3. Define, using conditionals, the function maximo such that maximo(x, y) is the maximum of x and y. For example, ~~~ maximo(2, 5) 5 maximo(2, 1) 2 ~~~
def maximo(x, y): if x > y: return x else: return y maximo(2, 5) maximo(2, 1)
Programacion_imperativa_en_Python.ipynb
jaalonso/AFV
gpl-3.0
Multiple conditionals. Exercise 4. Define the function signo such that signo(x) is the sign of x. For example, ~~~ signo(5) 1 signo(-7) -1 signo(0) 0 ~~~
def signo(x): if x > 0: return 1 elif x < 0: return -1 else: return 0 signo(5) signo(-7) signo(0)
Programacion_imperativa_en_Python.ipynb
jaalonso/AFV
gpl-3.0
Iterative structures. While loops. Exercise 5. Define, with a while loop, the function sumaImpares such that sumaImpares(n) is the sum of the first n odd numbers. For example, ~~~ sumaImpares(3) 9 sumaImpares(4) 16 ~~~
def sumaImpares(n): s, k = 0, 0 while k < n: s = s + 2*k + 1 k = k + 1 return s sumaImpares(3) sumaImpares(4)
Programacion_imperativa_en_Python.ipynb
jaalonso/AFV
gpl-3.0
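The sample values above (9 = 3², 16 = 4²) are no accident: the sum of the first n odd numbers is n². Restating the function from the cell above, this identity gives a quick sanity check:

```python
def sumaImpares(n):
    # sum of the first n odd numbers, via a while loop
    s, k = 0, 0
    while k < n:
        s += 2 * k + 1
        k += 1
    return s

# The k-th partial sum collapses telescopically: 1 + 3 + ... + (2n-1) = n**2.
assert all(sumaImpares(n) == n ** 2 for n in range(100))
print(sumaImpares(3), sumaImpares(4))  # 9 16
```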
Exercise 6. Define the function mayorExponente such that mayorExponente(a, n) is the largest k such that a^k divides n. For example, ~~~ mayorExponente(2, 40) 3 ~~~
def mayorExponente(a, n): k = 0 while n % a == 0: n = n // a k = k + 1 return k mayorExponente(2, 40)
Programacion_imperativa_en_Python.ipynb
jaalonso/AFV
gpl-3.0
For loops. Exercise 7. Define, by iteration with for, the function fact such that fact(n) is the factorial of n. For example, ~~~ fact(4) 24 ~~~
def fact(n): f = 1 for k in range(1,n+1): f = f * k return f fact(4)
Programacion_imperativa_en_Python.ipynb
jaalonso/AFV
gpl-3.0
For loops over lists. Exercise 8. Define, by iteration, the function suma such that suma(xs) is the sum of the numbers in the list xs. For example, ~~~ suma([3, 2, 5]) 10 ~~~
def suma(xs): r = 0 for x in xs: r = x + r return r suma([3, 2, 5])
Programacion_imperativa_en_Python.ipynb
jaalonso/AFV
gpl-3.0
Recursion. Exercise 9. Define, by recursion, the function factR such that factR(n) is the factorial of n. For example, ~~~ factR(4) 24 ~~~
def factR(n): if n == 0: return 1 else: return n * factR(n-1) factR(4)
Programacion_imperativa_en_Python.ipynb
jaalonso/AFV
gpl-3.0
Cheatsheet for Decision Tree Classification Algorithm. Start at the root node as parent node. Split the parent node at the feature a to minimize the sum of the child node impurities (maximize information gain). Assign training samples to new child nodes. Stop if leaf nodes are pure or an early stopping criterion is satisfied...
import numpy as np import matplotlib.pyplot as plt %matplotlib inline def entropy(p): return - p*np.log2(p) - (1 - p)*np.log2((1 - p)) x = np.arange(0.0, 1.0, 0.01) ent = [entropy(p) if p != 0 else None for p in x] plt.plot(x, ent) plt.ylim([0,1.1]) plt.xlabel('p(i=1)') plt.axhline(y=1.0, linewidth=1, color='k', ...
machine_learning/decision_trees/decision-tree-cheatsheet.ipynb
mhdella/pattern_classification
gpl-3.0
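The split-selection step in the cheatsheet above can be illustrated with the same entropy definition the plot uses. This is a minimal sketch for binary labels; `information_gain` is a hypothetical helper, not part of the original notebook.

```python
import numpy as np

def entropy(p):
    # binary entropy, with the convention 0*log2(0) = 0
    if p in (0, 1):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def information_gain(parent, left, right):
    """Entropy decrease from splitting `parent` (array of 0/1 labels)
    into `left` and `right` child label arrays."""
    def node_entropy(y):
        return entropy(np.mean(y)) if len(y) else 0.0
    n = len(parent)
    child = (len(left) / n) * node_entropy(left) \
          + (len(right) / n) * node_entropy(right)
    return node_entropy(parent) - child

y = np.array([0, 0, 1, 1])
# A perfect split removes all impurity: the gain equals the parent
# entropy, which is 1 bit for a balanced binary node.
print(information_gain(y, y[:2], y[2:]))  # 1.0
```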
Gini Impurity $$I_G(t) = \sum_{i =1}^{C}p(i \mid t) \big(1-p(i \mid t)\big)$$
def gini(p): return (p)*(1 - (p)) + (1-p)*(1 - (1-p)) x = np.arange(0.0, 1.0, 0.01) plt.plot(x, gini(x)) plt.ylim([0,1.1]) plt.xlabel('p(i=1)') plt.axhline(y=0.5, linewidth=1, color='k', linestyle='--') plt.ylabel('Gini Impurity') plt.show()
machine_learning/decision_trees/decision-tree-cheatsheet.ipynb
mhdella/pattern_classification
gpl-3.0
Misclassification Error $$I_M(t) = 1 - \max_i \, p(i \mid t)$$
def error(p): return 1 - np.max([p, 1-p]) x = np.arange(0.0, 1.0, 0.01) err = [error(i) for i in x] plt.plot(x, err) plt.ylim([0,1.1]) plt.xlabel('p(i=1)') plt.axhline(y=0.5, linewidth=1, color='k', linestyle='--') plt.ylabel('Misclassification Error') plt.show()
machine_learning/decision_trees/decision-tree-cheatsheet.ipynb
mhdella/pattern_classification
gpl-3.0
Comparison
fig = plt.figure() ax = plt.subplot(111) for i, lab in zip([ent, gini(x), err], ['Entropy', 'Gini Impurity', 'Misclassification Error']): line, = ax.plot(x, i, label=lab) ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.15), ncol=3, fancybox=True, shadow=False) ax.axhline(y=0.5, ...
machine_learning/decision_trees/decision-tree-cheatsheet.ipynb
mhdella/pattern_classification
gpl-3.0
We also include some code from SciPy for numerical calculations
from scipy.linalg import solve_toeplitz # a matrix equation solver from scipy.integrate import cumtrapz # a numerical integrator, using trapezoid rule
PDE_Solve_widget.ipynb
mlamoureux/PIMS_YRC
mit
And we include some code to create the graphical user interface -- namely, sliders. You can read about them here: http://bokeh.pydata.org/en/0.10.0/_images/notebook_interactors.png
from ipywidgets import interact
PDE_Solve_widget.ipynb
mlamoureux/PIMS_YRC
mit
Explicit method of solution in finite differences The finite difference method is probably the most common numerical method for solving PDEs. The derivatives are approximated by Newton difference ratios, and you step through in time to progress the solution from $t=0$ to some ending time. The following defines a functi...
# Based on Section 14.4 in Myint-U and Debnath's book def w_solve1(c2,dx,dt,t_len,u0,u1): x_len = np.size(u0) # the length of u0 implicitly defines the num of points in x direction u = np.zeros((x_len,t_len),order='F') # output array initialized to zero e2 = c2*dt*dt/(dx*dx) # Courant parameter squared (...
PDE_Solve_widget.ipynb
mlamoureux/PIMS_YRC
mit
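The explicit scheme described above can be sketched as follows. This is a minimal stand-in for the (truncated) w_solve1, assuming zero boundary values and a first step taken from a Taylor expansion with the initial velocity; it is not the book's exact code.

```python
import numpy as np

def wave_explicit(c2, dx, dt, t_len, u0, u1):
    """Explicit finite differences for u_tt = c^2 u_xx.
    u0 = initial position u(x, 0); u1 = initial velocity u_t(x, 0).
    Boundary values are held at zero."""
    x_len = u0.size
    u = np.zeros((x_len, t_len))
    e2 = c2 * dt * dt / (dx * dx)      # squared Courant parameter
    u[:, 0] = u0
    # First time step: Taylor expansion using the initial velocity.
    u[1:-1, 1] = (u0[1:-1] + dt * u1[1:-1]
                  + 0.5 * e2 * (u0[2:] - 2 * u0[1:-1] + u0[:-2]))
    # Central differences in both time and space thereafter.
    for j in range(1, t_len - 1):
        u[1:-1, j + 1] = (2 * (1 - e2) * u[1:-1, j]
                          + e2 * (u[2:, j] + u[:-2, j])
                          - u[1:-1, j - 1])
    return u
```

The update rule is the standard one: u[i, j+1] = 2(1 - e2) u[i, j] + e2 (u[i+1, j] + u[i-1, j]) - u[i, j-1], stable when e2 <= 1.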
Let's try a real wave equation solution. We start with a simple triangle waveform.
x_len = 1000 t_len = 1000 dx = 1./x_len dt = 1./t_len x = np.linspace(0,1,x_len) t = np.linspace(0,1,t_len) triangle = np.maximum(0.,.1-np.absolute(x-.4))
PDE_Solve_widget.ipynb
mlamoureux/PIMS_YRC
mit
Now we call up our wave equation solver, using the parameters above
# Here we solve the wave equation, with initial position $u(x,0)$ set to the triangle waveform (u,x,t)=w_solve1(.5,dx,dt,t_len,triangle,0*triangle)
PDE_Solve_widget.ipynb
mlamoureux/PIMS_YRC
mit
We can plot the initial waveform, just to see what it looks like.
plot(x,u[:,0]) def update(k=0): plot(x,u[:,k]) show()
PDE_Solve_widget.ipynb
mlamoureux/PIMS_YRC
mit
And the next cell sets up a slider which controls the above graphs (it moves time along)
# This runs an animation, controlled by a slider which advances time interact(update,k=(0,t_len-1))
PDE_Solve_widget.ipynb
mlamoureux/PIMS_YRC
mit
Derivative initial condition test
# We try again, but this time with the $u_t$ initial condition equal to the triangle impulse (u,x,t)=w_solve1(.5,dx,dt,3*t_len,0*triangle,1*triangle)
PDE_Solve_widget.ipynb
mlamoureux/PIMS_YRC
mit
We can use the same update function, since nothing has changed.
interact(update,k=(0,3*t_len-1))
PDE_Solve_widget.ipynb
mlamoureux/PIMS_YRC
mit
Implicit method. Here we try an implicit method for solving the wave equation, again from Myint-U and Debnath's book, Section 14.5, part (B) on hyperbolic equations. We need the SciPy libraries because we have to solve a system of linear equations. In fact the system is tridiagonal and Toeplitz, so this should be fast. ...
# Based on Section 14.5 (B) in Myint-U and Debnath's book
def w_solve2(c2,dx,dt,t_len,u0,u1):
    x_len = np.size(u0)  # the length of u0 implicitly defines the num of points in x direction
    u = np.zeros((x_len,t_len),order='F')  # output array initialized to zero
    e2 = c2*dt*dt/(dx*dx)  # Courant parameter squar...
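The heart of the implicit method is the tridiagonal Toeplitz solve at each time step. A hedged sketch of how such a system can be solved efficiently with `scipy.linalg.solve_banded`; the matrix entries below (diagonal $1+2a$, off-diagonals $-a$) are illustrative of the kind that arises in implicit schemes, not the exact coefficients of the solver above:

```python
import numpy as np
from scipy.linalg import solve_banded

# Illustrative tridiagonal Toeplitz system of size n
n = 5
a = 0.25
ab = np.zeros((3, n))      # banded storage: rows = superdiag, main diag, subdiag
ab[0, 1:] = -a             # superdiagonal entries A[i, i+1]
ab[1, :] = 1 + 2 * a       # main diagonal entries A[i, i]
ab[2, :-1] = -a            # subdiagonal entries A[i+1, i]
rhs = np.ones(n)

# (1, 1) says one subdiagonal and one superdiagonal; O(n) work
sol = solve_banded((1, 1), ab, rhs)
```

Because the matrix never changes between time steps, one could also factor it once (e.g. with `scipy.linalg.lu_factor`) and reuse the factorization.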
Derivative initial condition
(u,x,t)=w_solve2(.5,dx,dt,3*t_len,0*triangle,1*triangle)
interact(update,k=(0,3*t_len-1))
D'Alembert's solution Since the velocity $c$ is constant in these examples, we can get the exact solution via D'Alembert. The general solution will be of the form $$u(x,t) = \phi(x+ct) + \psi(x-ct). $$ Initial conditions tell us that $$u(x,0) = \phi(x) + \psi(x) = f(x), $$ and $$u_t(x,0) = c\phi'(x) - c\psi'(x) = g(x...
# Based on D'Alembert's solution, as described above
def w_solve3(c2,dx,dt,t_len,u0,u1):
    x_len = np.size(u0)  # the length of u0 implicitly defines the num of points in x direction
    u = np.zeros((x_len,t_len),order='F')  # output array initialized to zero
    c = np.sqrt(c2)  # the actual velocity parameter is n...
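For the special case of zero initial velocity, D'Alembert's formula reduces to $u(x,t) = \tfrac12\left[f(x+ct) + f(x-ct)\right]$ and can be evaluated directly whenever $f$ is available as a function. A hedged sketch (this is a reimplementation for the zero-velocity case, not the original `w_solve3`):

```python
import numpy as np

def dalembert_zero_velocity(f, c, x, t):
    """Evaluate u(x,t) = 0.5*(f(x+ct) + f(x-ct)) on grids x and t.

    f is a callable giving the initial position u(x,0); the initial
    velocity u_t(x,0) is assumed to be zero.  Returns an array of
    shape (len(x), len(t)), matching the u[:,k] indexing used above.
    """
    X, T = np.meshgrid(x, t, indexing='ij')
    return 0.5 * (f(X + c * T) + f(X - c * T))

# Example: a narrow Gaussian splits into two half-amplitude pulses
f = lambda s: np.exp(-(s - 0.5) ** 2 / 0.01)
x = np.linspace(0, 1, 200)
t = np.linspace(0, 0.3, 50)
u = dalembert_zero_velocity(f, 1.0, x, t)
```

Note this ignores boundaries entirely, so it is only valid until the pulses reach the edge of the domain.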
Derivative initial condition
(u,x,t)=w_solve3(.5,dx,dt,3*t_len,0*triangle,1*triangle)
interact(update,k=(0,3*t_len-1))
Comparing solutions In principle, we want these different solution methods to be directly comparable. So let's try this out, by computing the difference of two solutions. Here we compare the explicit finite-difference method with d'Alembert's method.
(u_exp,x,t)=w_solve1(.5,dx,dt,t_len,1*triangle,0*triangle)
(u_dal,x,t)=w_solve3(.5,dx,dt,t_len,1*triangle,0*triangle)

def update2(k=0):
    plot(x,u_dal[:,k]-u_exp[:,k])
    show()

interact(update2,k=(0,t_len-1))
A moving wavefront Let's try an actual wave. We want something like $$u(x,t) = \exp(-(x -x_a-ct)^2/w^2), $$ where $x_a$ is the center of the Gaussian at $t=0$, $w$ is the width of the Gaussian, $c$ is the velocity of the wave. This gives $$u_0(x) = \exp(-(x -x_a)^2/w^2) \\ u_1(x) = \frac{2c(x-x_a)}{w^2}\exp(-(x -x_a)^...
c = .707  # velocity
x_len = 1000
t_len = 1000
dx = 1./x_len
dt = 1./t_len
x = np.linspace(0,1,x_len)
t = np.linspace(0,1,t_len)
u0 = np.exp(-(x-.5)*(x-.5)/.01)
u1 = 2*c*u0*(x-.5)/.01
(u,x,t)=w_solve3(c*c,dx,dt,t_len,u0,u1)  # notice we input the velocity squared!
interact(update,k=(0,t_len-1))
Here are the functions we wrote in the previous tutorial to compute and draw from a GP:
import numpy as np
from scipy.linalg import cho_factor

def ExpSquaredKernel(t1, t2=None, A=1.0, l=1.0):
    """
    Return the ``N x M`` exponential squared covariance matrix
    between time vectors `t1` and `t2`. The kernel has amplitude
    `A` and lengthscale `l`.
    """
    if t2 is None:
        t2 = ...
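The cell above is truncated in this copy. A self-contained sketch of a squared-exponential kernel and a prior draw via a Cholesky factor; the $\exp(-\Delta t^2 / 2l^2)$ normalization is our assumption, and the original `ExpSquaredKernel` may use a different convention:

```python
import numpy as np

def exp_squared_kernel(t1, t2=None, A=1.0, l=1.0):
    """N x M squared-exponential covariance: A * exp(-(t1_i - t2_j)^2 / (2 l^2)).

    A reimplementation sketch, not the original notebook's kernel.
    """
    if t2 is None:
        t2 = t1
    d = np.subtract.outer(t1, t2)   # all pairwise time differences
    return A * np.exp(-0.5 * d ** 2 / l ** 2)

# Draw one sample from the GP prior via a Cholesky factor;
# a small "jitter" term keeps K numerically positive definite.
t = np.linspace(0, 10, 100)
K = exp_squared_kernel(t, A=1.0, l=1.0) + 1e-10 * np.eye(len(t))
L = np.linalg.cholesky(K)
sample = L @ np.random.randn(len(t))
```

If `L @ z` with `z` standard normal has covariance `L @ L.T = K`, which is exactly the GP prior we want to sample from.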
Sessions/Session09/Day1/gps/02-Inference.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
The Marginal Likelihood In the previous notebook, we learned how to construct and sample from a simple GP. This is useful for making predictions, i.e., interpolating or extrapolating based on the data you measured. But the true power of GPs comes from their application to regression and inference: given a dataset $D$ a...
def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):
    """

    """
    # do stuff in here
    pass
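One possible way to fill in that stub, shown as a hedged reference sketch: the marginal likelihood of data $y$ under a zero-mean GP is the multivariate-normal density $\ln \mathcal{L} = -\tfrac12 y^\top K^{-1} y - \tfrac12 \ln|K| - \tfrac{N}{2}\ln 2\pi$, computed stably via a Cholesky factorization. The kernel convention here is an assumption and may differ from the notebook's `ExpSquaredKernel`:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ln_gp_likelihood_sketch(t, y, sigma=0, A=1.0, l=1.0):
    """Marginal log-likelihood of y under a zero-mean GP (a sketch).

    K is a squared-exponential kernel A * exp(-0.5 (dt/l)^2) plus
    white-noise variance sigma^2 on the diagonal.
    """
    t = np.atleast_1d(t)
    d = np.subtract.outer(t, t)
    K = A * np.exp(-0.5 * d ** 2 / l ** 2) + sigma ** 2 * np.eye(len(t))
    c, low = cho_factor(K)
    # ln|K| = 2 * sum(log(diag of Cholesky factor))
    lndet = 2.0 * np.sum(np.log(np.diag(c)))
    return -0.5 * (y @ cho_solve((c, low), y)
                   + lndet
                   + len(t) * np.log(2 * np.pi))
```

Using `cho_factor`/`cho_solve` avoids forming $K^{-1}$ explicitly, which is both faster and numerically safer than `np.linalg.inv`.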
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 2</h1> </div> The following dataset was generated from a zero-mean Gaussian Process with a Squared Exponential Kernel of unity amplitude and unknown timescale. Compute the marginal log...
import matplotlib.pyplot as plt

t, y, sigma = np.loadtxt("data/sample_data.txt", unpack=True)
plt.plot(t, y, "k.", alpha=0.5, ms=3)
plt.xlabel("time")
plt.ylabel("data");
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 3a</h1> </div> The timeseries below was generated by a linear function of time, $y(t)= mt + b$. In addition to observational uncertainty $\sigma$ (white noise), there is a fair bit of ...
t, y, sigma = np.loadtxt("data/sample_data_line.txt", unpack=True)
m_true, b_true, A_true, l_true = np.loadtxt("data/sample_data_line_truths.txt", unpack=True)
plt.errorbar(t, y, yerr=sigma, fmt="k.", label="observed")
plt.plot(t, m_true * t + b_true, color="C0", label="truth")
plt.legend(fontsize=12)
plt.xlabel("time"...
I believe some preprocessing to remove elements or comments that don't "fit" a table would help. Manually, I copied and pasted the spreadsheet cells containing only tables and saved them as new files. Then I load those.
skieval2d_periodic = pd.read_excel(SUBDIR+"ski-eval_2d_periodic_abridged.xlsx")
skieval2d_periodic
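The manual copy-and-paste step could also be automated: coerce every column to numeric so that comment rows become NaN, then drop them. A hedged sketch with a made-up stand-in frame (the column names follow the table here; the comment string and values are hypothetical):

```python
import pandas as pd

# Hypothetical messy sheet: a comment row mixed in with numeric data
raw = pd.DataFrame({
    "inch": ["note: see CAD file", 1, 2, 3],
    "m/s":  [None, 10.0, 10.0, 10.0],
    "drag": [None, 0.5, 0.9, 1.4],
})

# Coerce to numeric (non-numeric cells become NaN), then drop rows
# that don't fit the table
clean = (raw.apply(pd.to_numeric, errors="coerce")
            .dropna()
            .reset_index(drop=True))
```

This keeps only rows where every column parses as a number, which matches the "remove elements or comments that don't fit" idea above.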
passiveSkis/passiveSkis.ipynb
ernestyalumni/servetheloop
mit
Then I can immediately make some quick plots. For instance, for each width in inches, I can plot drag or lift vs. velocity (m/s):
ax1 = skieval2d_periodic.loc[skieval2d_periodic['inch']==1].plot.area(x="m/s",y="drag",color="Red",label="1 in")
ax2 = skieval2d_periodic.loc[skieval2d_periodic['inch']==2].plot.area(x="m/s",y="drag",color="Green",label="2 in",ax=ax1)
ax3 = skieval2d_periodic.loc[skieval2d_periodic['inch']==3].plot.area(x="m/s",y="drag"...
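An alternative to repeating the boolean filter once per width is to group by the width column. A hedged sketch with synthetic stand-in data (the column names follow the table above; the numeric values are made up):

```python
import pandas as pd

# Synthetic stand-in for the spreadsheet data (values are made up)
df = pd.DataFrame({
    "inch": [1, 1, 2, 2, 3, 3],
    "m/s":  [5, 10, 5, 10, 5, 10],
    "drag": [0.2, 0.5, 0.4, 0.9, 0.7, 1.4],
})

# One drag-vs-velocity series per width, ready to plot or tabulate
series = {inch: g.set_index("m/s")["drag"] for inch, g in df.groupby("inch")}
```

Each entry of `series` can then be plotted onto a shared axis with `s.plot(ax=ax, label=f"{inch} in")`, which scales to any number of widths without copy-pasting lines.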
Import required modules
# Generate plots inline
%matplotlib inline

import json
import os

# Support to access the remote target
import devlib
from env import TestEnv
from executor import Executor

# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp

# Support for trace events analysis
from trace import Trace
ipynb/examples/trace_analysis/TraceAnalysis_FunctionsProfiling.ipynb
joelagnel/lisa
apache-2.0
Target Configuration The target configuration is used to describe and configure your test environment. You can find more details in examples/utils/testenv_example.ipynb.
# Setup target configuration
my_conf = {

    # Target platform and board
    "platform"    : 'linux',
    "board"       : 'juno',
    "host"        : '192.168.0.1',
    "password"    : 'juno',

    # Folder where all the results will be collected
    "results_dir" : "TraceAnalysis_FunctionsProfiling",

    # Define de...