SVD features of edges decomposed from incidence matrix
# SVD features of edges decomposed from incidence matrix
fig = plt.figure(figsize=(10, 10))
colors = ['green', 'hotpink', 'yellow', 'cyan', 'red', 'purple']
svd = fig.add_subplot(1, 1, 1)
vi1 = np.transpose(vi)
svd.scatter(vi1[:, 0], vi1[:, 1],
            c=np.array(ed_label), s=50,
            cmap=matplotlib.colors.ListedColormap(colors))
svd...
_____no_output_____
MIT
incidence-mat-exp.ipynb
supriya-pandhre/incidence-mat-exp
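The truncated cell above can be sketched end-to-end with a toy incidence matrix. This is a minimal NumPy-only reconstruction: the graph, the matrix `B`, and the two-component feature choice are assumptions for illustration, not the notebook's actual data (which uses `vi` and `ed_label`).

```python
import numpy as np

# Hypothetical 4-node / 4-edge cycle graph: incidence matrix B (nodes x edges),
# B[i, j] = 1 if node i touches edge j (undirected, unweighted).
B = np.array([
    [1, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

# SVD of the incidence matrix: the right-singular vectors give edge features.
u, s, vt = np.linalg.svd(B)
edge_features = vt.T[:, :2]   # first two singular directions per edge

print(edge_features.shape)  # (4, 2): one 2-D feature vector per edge
```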
NORMALIZED GRAPH LAPLACIAN. Decomposing the normalized Laplacian and plotting node features (W)
# calculate normalized graph laplacian
L = nx.normalized_laplacian_matrix(G).todense()
print(L.shape)
print(L[0, 0:5])

# NMF does not work on an input matrix with negative values
# from sklearn.decomposition import NMF
# model = NMF(n_components=2, init='random', random_state=0)
# # decomposing normalized graph laplacian ...
_____no_output_____
MIT
incidence-mat-exp.ipynb
supriya-pandhre/incidence-mat-exp
SVD decomposition of normalized graph laplacian
# SVD decomposition
ul, sl, vl = np.linalg.svd(L)
print(ul.shape)
# u = np.around(u, decimals=5)
# print(ui)
print(sl.shape)
# s = np.around(s)
# print(si)
print(vl.shape)
# v = np.around(v, decimals=5)
# print(vi)
(30, 30) (30,) (30, 30)
MIT
incidence-mat-exp.ipynb
supriya-pandhre/incidence-mat-exp
Displaying SVD node features (U) of the Laplacian matrix. Doing SVD on the normalized graph Laplacian gives USV^T, where U and V are the same, i.e. the rows of U equal the columns of V^T. Hence the node features from U are displayed below.
import matplotlib
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(figsize=(10, 10))
colors = ['green', 'hotpink', 'yellow', 'cyan', 'red', 'purple']
svd = fig.add_subplot(1, 1, 1)
svd.scatter(ul[:, 0], ul[:, 1],
            c=np.array(list(partition.values())), s=50,
            cmap=matplotlib.colors.ListedColormap(colors))
svd.title.set_text("U-nodes:SV...
_____no_output_____
MIT
incidence-mat-exp.ipynb
supriya-pandhre/incidence-mat-exp
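The claim that U and V coincide follows from the normalized Laplacian being symmetric positive semidefinite, so its SVD is an eigendecomposition. A quick NumPy check on a hypothetical 3-node path-graph Laplacian (an illustration, not the notebook's graph `G`):

```python
import numpy as np

# Hypothetical normalized Laplacian of a 3-node path graph
L = np.array([
    [ 1.0, -0.5,  0.0],
    [-0.5,  1.0, -0.5],
    [ 0.0, -0.5,  1.0],
])

u, s, vt = np.linalg.svd(L)

# For a symmetric positive (semi)definite matrix with distinct singular
# values, U and V agree, so rows of U match columns of V^T.
print(np.allclose(u, vt.T))  # True
```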
ADJACENCY MATRIX. Decomposing the adjacency matrix and displaying node features
Adj = nx.adjacency_matrix(G)
print(Adj.todense().shape)

# convert adjacency matrix to a dense matrix (the default format is sparse)
AdjDense = Adj.todense()
(30, 30)
MIT
incidence-mat-exp.ipynb
supriya-pandhre/incidence-mat-exp
NMF decomposition of Adjacency matrix
from sklearn.decomposition import NMF

model = NMF(n_components=2, init='random', random_state=0)
Wa = model.fit_transform(AdjDense)
Ha = model.components_
erra = model.reconstruction_err_
ita = model.n_iter_
print(erra)
print(ita)
print(Wa.shape)
print(Ha.shape)
print(Wa[0])
print(Ha[:, 0])
# displaying learned nodes imp...
_____no_output_____
MIT
incidence-mat-exp.ipynb
supriya-pandhre/incidence-mat-exp
SVD Decomposition of adjacency matrix
# Calculate the SVD (Singular Value Decomposition) of the graph's adjacency matrix
ua, sa, va = np.linalg.svd(AdjDense)
print(ua.shape)
# u = np.around(u, decimals=3)
# print(u)
print(sa.shape)
# s = np.around(s)
# print(s)
print(va.shape)
# v = np.around(v, decimals=3)
# print(v)

import matplotlib
import numpy as np

fig = plt.figur...
_____no_output_____
MIT
incidence-mat-exp.ipynb
supriya-pandhre/incidence-mat-exp
Challenge 3
Challenge 3.1
myinput = '/home/fmuinos/projects/adventofcode/2016/ferran/inputs/input3.txt'

def is_triangle(sides):
    # sort so the triangle inequality only needs checking once
    a, b, c = sorted(sides)
    return a + b > c

def no_triangles(path):
    with open(path, 'rt') as f:
        ntr = 0
        for line in f:
            sides = list(map(int, line.rstrip().split()))
            if is_triang...
_____no_output_____
MIT
2016/ferran/day3.ipynb
bbglab/adventofcode
Challenge 3.2
def no_triangles_by_cols(path):
    triangles = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
    with open(path, 'rt') as f:
        ntr = 0
        i = 1
        for line in f:
            sides = list(map(int, line.rstrip().split()))
            for j in range(3):
                triangles[j][i % 3] = sides[j]
            if i % 3 == ...
_____no_output_____
MIT
2016/ferran/day3.ipynb
bbglab/adventofcode
Agenda **Topics**: * Python review: - Variables - Mathematical operations * Hands-on exercise * Loading the data * Visualizations. Python review: Variables. A variable is an object that holds a value and stores that value in the computer's memory during development. We can initial...
# we can define a variable by giving it a name
ano = 2020

# to print the variable we created, we use the print function
print(ano)

salario = 1500
print(salario)

salario = 1000
print(salario)
1500 1000
MIT
Primeiros_Passos_Christian_Python.ipynb
ChristianEngProd/HTML_Teste
Mathematical operations. With Python we can perform mathematical operations. With variables, this becomes even more powerful.
salario1 = 1500
salario2 = 1000
print(salario1 + salario2)   # addition
print(salario1 - salario2)   # subtraction
print(salario1 * salario2)   # multiplication
print(salario1 / salario2)   # division
print(salario1 // salario2)  # integer division
print(salario1 % salario2)   # remainder
print(salario1 ** 2)         # exponentiation
2500 500 1500000 1.5 1 500 2250000
MIT
Primeiros_Passos_Christian_Python.ipynb
ChristianEngProd/HTML_Teste
Hands-on exercise: * 1. Open Google Colab: https://colab.research.google.com/ * 2. Log in to your Google account * 3. File --> New notebook. Loading the data
# Pandas library
import pandas as pd

# Loading datasets from Jan 2022 to Mar 2022
# Source: https://www.gov.br/anp/pt-br/centrais-de-conteudo/dados-abertos/serie-historica-de-precos-de-combustiveis
etanol_202201 = pd.read_csv('https://github.com/marioandrededeus/semana_sala_aberta_DH/raw/main/precos-gasolina-etanol-202...
_____no_output_____
MIT
Primeiros_Passos_Christian_Python.ipynb
ChristianEngProd/HTML_Teste
Dimensions of the dataframe (table)
df.shape
_____no_output_____
MIT
Primeiros_Passos_Christian_Python.ipynb
ChristianEngProd/HTML_Teste
Visualizations. Price by state
df_estado = df.groupby('Estado')['Valor de Venda'].mean()
df_estado

df_estado.plot.bar(figsize=(20, 5));
_____no_output_____
MIT
Primeiros_Passos_Christian_Python.ipynb
ChristianEngProd/HTML_Teste
Price by region
df_regiao = df.groupby('Regiao')['Valor de Venda'].mean()
df_regiao

df_regiao.plot.bar(figsize=(10, 5));
_____no_output_____
MIT
Primeiros_Passos_Christian_Python.ipynb
ChristianEngProd/HTML_Teste
Price by region - timeline
df_regiao_data = df.groupby(['Regiao', 'Data da Coleta'])['Valor de Venda'].mean().reset_index()
df_regiao_data

import seaborn as sns
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 5))
sns.lineplot(data=df_regiao_data, x='Data da Coleta', y='Valor de Venda', hue...
_____no_output_____
MIT
Primeiros_Passos_Christian_Python.ipynb
ChristianEngProd/HTML_Teste
Network Initializer. What is a neuron? Feed-forward neural networks are inspired by the information processing of one or more neural cells, called neurons. A neuron accepts input signals via its dendrites, which pass the electrical signal down to the cell body. The axon carries the signal out to synapses, which are the c...
from random import random, seed

def initialize_network(n_inputs, n_hidden, n_outputs):
    network = list()
    # Creating the hidden layer according to the number of inputs
    hidden_layer = [{'weights': [random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
    network.append(hidden_layer)
    # Creating o...
[{'weights': [0.7887233511355132, 0.0938595867742349, 0.02834747652200631]}] [{'weights': [0.8357651039198697, 0.43276706790505337]}, {'weights': [0.762280082457942, 0.0021060533511106927]}]
Apache-2.0
003-forward-and-back-props/Backward Propagation.ipynb
wfraher/deeplearning
Forward propagate. We can calculate an output from a neural network by propagating an input signal through each layer until the output layer outputs its values. We can break forward propagation down into three parts:

1. Neuron Activation.
2. Neuron Transfer.
3. Forward Propagation.

1. Neuron Activation. The first step is to ...
# Implementation
def activate(weights, inputs):
    # the last weight is the bias term
    activation = weights[-1]
    for i in range(len(weights) - 1):
        activation += weights[i] * inputs[i]
    return activation
_____no_output_____
Apache-2.0
003-forward-and-back-props/Backward Propagation.ipynb
wfraher/deeplearning
2. Neuron Transfer. Once a neuron is activated, we need to transfer the activation to see what the neuron output actually is. Different transfer functions can be used. It is traditional to use the *sigmoid activation function*, but you can also use the *tanh* (hyperbolic tangent) function to transfer outputs. More recent...
from math import exp

def transfer(activation):
    return 1.0 / (1.0 + exp(-activation))
_____no_output_____
Apache-2.0
003-forward-and-back-props/Backward Propagation.ipynb
wfraher/deeplearning
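The text mentions *tanh* as an alternative transfer function. A hedged sketch of that pair — the function and its derivative expressed in terms of the neuron output, mirroring the sigmoid `transfer_derivative` defined later. The names `transfer_tanh` and `transfer_tanh_derivative` are my own, not from the notebook.

```python
from math import tanh

def transfer_tanh(activation):
    # hyperbolic tangent transfer: output in (-1, 1)
    return tanh(activation)

def transfer_tanh_derivative(output):
    # d/dx tanh(x) = 1 - tanh(x)^2, written in terms of the output
    return 1.0 - output ** 2

print(transfer_tanh(0.0))             # 0.0
print(transfer_tanh_derivative(0.0))  # 1.0
```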
3. Forward propagate
# Forward propagation is self-explanatory
def forward_propagate(network, row):
    inputs = row
    for layer in network:
        new_inputs = []
        for neuron in layer:
            activation = activate(neuron['weights'], inputs)
            neuron['output'] = transfer(activation)
            new_inputs.append(neuro...
_____no_output_____
Apache-2.0
003-forward-and-back-props/Backward Propagation.ipynb
wfraher/deeplearning
Backpropagation. What is it?

1. Error is calculated between the expected outputs and the outputs forward propagated from the network.
2. These errors are then propagated backward through the network from the output layer to the hidden layer, assigning blame for the error and updating weights as they go.

This part is brok...
# Calculates the derivative of a neuron's (sigmoid) output
def transfer_derivative(output):
    return output * (1.0 - output)
_____no_output_____
Apache-2.0
003-forward-and-back-props/Backward Propagation.ipynb
wfraher/deeplearning
Error Backpropagation.

1. Calculate the error for each output neuron; this gives us the error signal (input) to propagate backwards through the network.

error = (expected - output) * transfer_derivative(output)

expected: expected output value for the neuron
output: output value for the neuron
transfer_derivative()-...
def backward_propagate_error(network, expected):
    for i in reversed(range(len(network))):
        layer = network[i]
        errors = list()
        if i != len(network) - 1:
            for j in range(len(layer)):
                error = 0.0
                for neuron in network[i + 1]:
                    error +=...
[{'weights': [0.7887233511355132, 0.0938595867742349, 0.02834747652200631], 'output': 0.6936142046010635, 'delta': -0.011477619712406795}] [{'weights': [0.8357651039198697, 0.43276706790505337], 'output': 0.7335023968859138, 'delta': -0.1433825771158816}, {'weights': [0.762280082457942, 0.0021060533511106927], 'output'...
Apache-2.0
003-forward-and-back-props/Backward Propagation.ipynb
wfraher/deeplearning
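Since the backward-pass cell above is truncated, here is a minimal self-contained sketch of the whole loop — activation, sigmoid transfer, forward pass, backward pass. The layer layout follows `initialize_network`; the completion of `backward_propagate_error` is a reconstruction consistent with the error formula quoted above, not necessarily the author's exact code.

```python
from math import exp
from random import random, seed

def activate(weights, inputs):
    activation = weights[-1]  # bias term
    for i in range(len(weights) - 1):
        activation += weights[i] * inputs[i]
    return activation

def transfer(activation):
    return 1.0 / (1.0 + exp(-activation))

def transfer_derivative(output):
    return output * (1.0 - output)

def forward_propagate(network, row):
    inputs = row
    for layer in network:
        new_inputs = []
        for neuron in layer:
            neuron['output'] = transfer(activate(neuron['weights'], inputs))
            new_inputs.append(neuron['output'])
        inputs = new_inputs
    return inputs

def backward_propagate_error(network, expected):
    for i in reversed(range(len(network))):
        layer = network[i]
        errors = []
        if i != len(network) - 1:
            # hidden layer: error is the weighted sum of downstream deltas
            for j in range(len(layer)):
                errors.append(sum(neuron['weights'][j] * neuron['delta']
                                  for neuron in network[i + 1]))
        else:
            # output layer: error against the expected values
            for j, neuron in enumerate(layer):
                errors.append(expected[j] - neuron['output'])
        for j, neuron in enumerate(layer):
            neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])

seed(1)
network = [
    [{'weights': [random() for _ in range(3)]}],                    # 1 hidden neuron (2 inputs + bias)
    [{'weights': [random() for _ in range(2)]} for _ in range(2)],  # 2 output neurons
]
forward_propagate(network, [1.0, 0.5])
backward_propagate_error(network, [0.0, 1.0])
print([neuron['delta'] for neuron in network[-1]])
```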
Modeling and Simulation in Python. Chapter 20. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
# Configure Jupyter so figures appear in the notebook
%matplotlib inline

# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'

# import functions from the modsim.py module
from modsim import *
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
Dropping pennies. I'll start by getting the units we need from Pint.
m = UNITS.meter s = UNITS.second
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
And defining the initial state.
init = State(y=381 * m, v=0 * m/s)
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
Acceleration due to gravity is about 9.8 m / s$^2$.
g = 9.8 * m/s**2
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
t_end = 10 * s
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
Now we make a `System` object.
system = System(init=init, g=g, t_end=t_end)
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
And define the slope function.
def slope_func(state, t, system):
    """Compute derivatives of the state.

    state: position, velocity
    t: time
    system: System object containing `g`

    returns: derivatives of y and v
    """
    y, v = state
    unpack(system)

    dydt = v
    dvdt = -g

    return dydt, dvdt
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
It's always a good idea to test the slope function with the initial conditions.
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
0.0 meter / second -9.8 meter / second ** 2
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
Now we're ready to call `run_ode_solver`
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
Here are the results:
results
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
And here's position as a function of time:
def plot_position(results):
    plot(results.y, label='y')
    decorate(xlabel='Time (s)', ylabel='Position (m)')

plot_position(results)
savefig('figs/chap09-fig01.pdf')
Saving figure to file figs/chap09-fig01.pdf
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
Onto the sidewalk. To figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
t_crossings = crossings(results.y, 0)
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
For this example there should be just one crossing, the time when the penny hits the sidewalk.
t_sidewalk = t_crossings[0] * s
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
We can compare that to the exact result. Without air resistance, we have

$v = -g t$

and

$y = 381 - g t^2 / 2$

Setting $y=0$ and solving for $t$ yields

$t = \sqrt{\frac{2 y_{init}}{g}}$
sqrt(2 * init.y / g)
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
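The closed form can be cross-checked with plain floats, independent of modsim/Pint (a sketch, assuming the same y_init = 381 m and g = 9.8 m/s² used above):

```python
import math

y_init = 381   # m
g = 9.8        # m/s**2

# t = sqrt(2 * y_init / g), the analytic fall time
t_fall = math.sqrt(2 * y_init / g)
print(round(t_fall, 2))  # 8.82 seconds
```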
The estimate is accurate to about 10 decimal places. Events. Instead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**. Here's an event function th...
def event_func(state, t, system):
    """Return the height of the penny above the sidewalk."""
    y, v = state
    return y
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
results, details = run_ode_solver(system, slope_func, events=event_func) details
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
The message from the solver indicates the solver stopped because the event we wanted to detect happened. Here are the results:
results
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
t_sidewalk = get_last_label(results) * s
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
details.t_events[0][0] * s
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
The result is accurate to about 15 decimal places. We can also check the velocity of the penny when it hits the sidewalk:
v_sidewalk = get_last_value(results.v) * m / s
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
And convert to kilometers per hour.
km = UNITS.kilometer h = UNITS.hour v_sidewalk.to(km / h)
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
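The conversion above can be cross-checked with plain floats, without the units machinery (a sketch, assuming y_init = 381 m and g = 9.8 m/s²):

```python
import math

y_init = 381   # m
g = 9.8        # m/s**2

# impact speed without air resistance: v = g*t = sqrt(2 * g * y_init), in m/s
v_impact = math.sqrt(2 * g * y_init)
print(round(v_impact * 3.6, 1))  # convert m/s to km/h
```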
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h. So it's a good thing there is air resistance. Under the hood. Here is the source code for `crossings` so you can see what's happening under the hood:
%psource crossings
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
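A minimal pure-NumPy analogue of what `crossings` does — locate where a sampled series passes through a value by interpolating between sign changes. This is a sketch for intuition, not the modsim implementation (which uses `InterpolatedUnivariateSpline.roots`); the function name `crossings_linear` is my own.

```python
import numpy as np

def crossings_linear(ts, ys, value):
    """Times where the sampled series ys(ts) crosses `value`,
    by linear interpolation between adjacent samples."""
    ts = np.asarray(ts)
    f = np.asarray(ys) - value
    idx = np.where(np.diff(np.sign(f)) != 0)[0]   # bracketing intervals
    return ts[idx] - f[idx] * (ts[idx + 1] - ts[idx]) / (f[idx + 1] - f[idx])

t = np.linspace(0, 10, 101)
y = 381 - 0.5 * 9.8 * t**2            # free fall from 381 m
print(crossings_linear(t, y, 0))      # close to sqrt(2*381/9.8) ≈ 8.82 s
```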
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) t...
# Solution
N = UNITS.newton
kg = UNITS.kilogram
m = UNITS.meter
AU = UNITS.astronomical_unit

# Solution
r_0 = (1 * AU).to_base_units()
v_0 = 0 * m / s
init = State(r=r_0, v=v_0)

# Solution
r_earth = 6.371e6 * m
r_sun = 695.508e6 * m
system = System(init=init, G=6.674e-11 * N / kg**2 * m...
_____no_output_____
MIT
code/soln/chap20soln.ipynb
arunkhattri/ModSimPy
# !gdown --id 1LukOUfVNeps1Jj7Z27JbkmrO90jwBgie
# !pip install kora
# from kora import drive
# drive.download_folder('1LukOUfVNeps1Jj7Z27JbkmrO90jwBgie')

import shutil
shutil.unpack_archive('mri.zip')
# !ls /content/img_align_celeba
_____no_output_____
Apache-2.0
SRGAN_Final.ipynb
ashishpatel26/SRGAN-Keras
Load Libraries
!pip install scipy==1.1.0

import glob
import time
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from keras import Input
from keras.applications import VGG19
from keras.callbacks import TensorBoard
from keras.layers import BatchNormalization, Activation, LeakyReLU, Add, Dense
from keras.lay...
_____no_output_____
Apache-2.0
SRGAN_Final.ipynb
ashishpatel26/SRGAN-Keras
Residual Block
def residual_block(x):
    """
    Residual block
    """
    filters = [64, 64]
    kernel_size = 3
    strides = 1
    padding = "same"
    momentum = 0.8
    activation = "relu"

    res = Conv2D(filters=filters[0], kernel_size=kernel_size, strides=strides, padding=padding)(x)
    res = Activation(activation=activat...
_____no_output_____
Apache-2.0
SRGAN_Final.ipynb
ashishpatel26/SRGAN-Keras
Build Generator
def build_generator():
    """
    Create a generator network using the hyperparameter values defined below
    :return:
    """
    residual_blocks = 16
    momentum = 0.8
    input_shape = (64, 64, 3)

    # Input layer of the generator network
    input_layer = Input(shape=input_shape)

    # Add the pre-residual bl...
_____no_output_____
Apache-2.0
SRGAN_Final.ipynb
ashishpatel26/SRGAN-Keras
Build Discriminator
def build_discriminator():
    """
    Create a discriminator network using the hyperparameter values defined below
    :return:
    """
    leakyrelu_alpha = 0.2
    momentum = 0.8
    input_shape = (256, 256, 3)

    input_layer = Input(shape=input_shape)

    # Add the first convolution block
    dis1 = Conv2D(filte...
_____no_output_____
Apache-2.0
SRGAN_Final.ipynb
ashishpatel26/SRGAN-Keras
Build VGG19
def build_vgg():
    """
    Build VGG network to extract image features
    """
    input_shape = (256, 256, 3)

    # Load a pre-trained VGG19 model trained on the 'Imagenet' dataset
    vgg = VGG19(include_top=False, weights='imagenet', input_shape=input_shape)
    vgg.outputs = [vgg.layers[20].output]

    # Create a K...
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg19/vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5 80142336/80134624 [==============================] - 0s 0us/step Model: "model" _________________________________________________________________ Layer (type) Output...
Apache-2.0
SRGAN_Final.ipynb
ashishpatel26/SRGAN-Keras
Sample Images
def sample_images(data_dir, batch_size, high_resolution_shape, low_resolution_shape):
    # Make a list of all images inside the data directory
    all_images = glob.glob(data_dir)

    # Choose a random batch of images
    images_batch = np.random.choice(all_images, size=batch_size)

    low_resolution_images = []
    ...
_____no_output_____
Apache-2.0
SRGAN_Final.ipynb
ashishpatel26/SRGAN-Keras
Save Images
def compute_psnr(original_image, generated_image):
    original_image = tf.convert_to_tensor(original_image, dtype=tf.float32)
    generated_image = tf.convert_to_tensor(generated_image, dtype=tf.float32)
    psnr = tf.image.psnr(original_image, generated_image, max_val=1.0)
    return tf.math.redu...
_____no_output_____
Apache-2.0
SRGAN_Final.ipynb
ashishpatel26/SRGAN-Keras
Write a Log
from PIL import Image from skimage.metrics import structural_similarity as ssim
_____no_output_____
Apache-2.0
SRGAN_Final.ipynb
ashishpatel26/SRGAN-Keras
Final SRGAN Execution
losses = {'d_history': [], "g_history": []}
psnr = {'psnr_quality': []}
ssim = {'ssim_quality': []}  # note: this dict shadows the `ssim` function imported earlier

from tqdm.notebook import tqdm
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)

data_dir = "/content/train/*.*"
os.makedirs("results", exist_ok=True)
# os.makedirs("HR", exist_ok=True)
#...
_____no_output_____
Apache-2.0
SRGAN_Final.ipynb
ashishpatel26/SRGAN-Keras
Software design for scientific computing. Unit 5: Integrating high-level languages with low-level languages. Unit 5 agenda: - JIT (Numba) - **Cython** - Integrating Python with FORTRAN - Integrating Python with C. Recap: - We wrote the Python code. - We moved everything to numpy. - We profiled. - Par...
# profiling
import timeit
import math

# plotting
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
_____no_output_____
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
Numba vs Cython. - Cython is a static/optimizing compiler for both the Python programming language and the extended Cython language. - It makes writing C extensions for Python as "easy" as Python itself. - Instead of analyzing bytecode and generating IR, Cython uses a superset of the syntax of ...
def mandel(x, y, max_iters):
    """
    Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the
    Mandelbrot set given a fixed number of iterations.
    """
    i = 0
    c = complex(x, y)
    z = 0.0j
    for i in range(max_iters):
        z = z * z + c
        ...
_____no_output_____
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
Example - Mandelbrot fractal, pure Python
# create the image
image = np.zeros((500 * 2, 750 * 2), dtype=np.uint8)

# run the computation
normal = %timeit -o create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)

# show everything
plt.imshow(image, cmap="viridis");
4.09 s ± 22.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
Example - Mandelbrot fractal, Cython
!pip install Cython
%load_ext Cython

%%cython --annotate
def mandel(x, y, max_iters):
    """
    Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the
    Mandelbrot set given a fixed number of iterations.
    """
    i = 0
    c = complex(x, y)
    z = 0.0j
    ...
_____no_output_____
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
Example - Mandelbrot fractal, Cython
# create the image
image = np.zeros((500 * 2, 750 * 2), dtype=np.uint8)

# run the computation
normal = %timeit -o create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)

# show everything
plt.imshow(image, cmap="viridis");
3.41 s ± 64.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
Cython Hello World 1/2. - Since Cython can accept almost any valid Python source file, one of the hardest parts of getting started is figuring out how to compile your extension. - So let's start with the canonical Python hello world:

```python
# helloworld.pyx
print("Hello World")
```

- You can see the resul...
import sys
sys.path.insert(0, "./cython")

import helloworld
helloworld.__file__
Hello World
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
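The compilation step glossed over above usually comes down to a small `setup.py`. A hedged sketch using the standard `cythonize` recipe — the filename `helloworld.pyx` comes from the slide; everything else is the conventional build boilerplate, not code from this course repo.

```python
# setup.py -- build in place with: python setup.py build_ext --inplace
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="helloworld",
    ext_modules=cythonize("helloworld.pyx"),
)
```

After building, `import helloworld` picks up the compiled extension module, which is what the `helloworld.__file__` check above demonstrates.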
Cython - Prime numbers
%%cython
def primes(int nb_primes):
    cdef int n, i, len_p
    cdef int p[1000]

    if nb_primes > 1000:
        nb_primes = 1000

    len_p = 0  # The current number of elements in p.
    n = 2
    while len_p < nb_primes:
        # Is n prime?
        for i in p[:len_p]:
            if n % i == 0:
                ...
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349...
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
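For comparison with the `%%cython` version, the same algorithm in pure Python (no static typing). The name `primes_py` is my own; this is the kind of baseline the benchmark cell below measures against.

```python
def primes_py(nb_primes):
    p = []
    n = 2
    while len(p) < nb_primes:
        # n is prime if no smaller prime divides it
        if all(n % i != 0 for i in p):
            p.append(n)
        n += 1
    return p

print(primes_py(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```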
Cython - Prime numbers - NumPy
%%cython
import numpy as np  # import where you compile

def primes_np(int nb_primes):
    # Memoryview on a NumPy array
    narr = np.empty(nb_primes, dtype=np.dtype(int))
    cdef long [:] narr_view = narr

    cdef long len_p = 0  # The current number of elements in p.
    cdef long n = 2
    whil...
[ 2 3 5 ... 17383 17387 17389]
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
Cython - Prime numbers - Profiling
%%cython --annotate
import numpy as np  # import where you compile

cdef primes_np(unsigned int nb_primes):
    # Memoryview on a NumPy array
    narr = np.empty(nb_primes, dtype=np.dtype(int))
    cdef long [:] narr_view = narr

    cdef long len_p = 0  # The current number of elements in p.
    cdef long...
_____no_output_____
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
What if we use the C++ vector library
%%cython --cplus
from libcpp.vector cimport vector

def primes_cpp(unsigned int nb_primes):
    cdef int n, i
    cdef vector[int] p
    p.reserve(nb_primes)  # allocate memory for 'nb_primes' elements.

    n = 2
    while p.size() < nb_primes:  # size() for vectors is similar to len()
        for i in p:
            ...
_____no_output_____
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
Benchmarks
%timeit primes(1000)
%timeit primes_np(1000)
%timeit primes_cpp(1000)
2.3 ms ± 58.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) 113 ms ± 1.35 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) 2.29 ms ± 19.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
Integrating pure C with Cython. - Suppose we have already written this super-complex C code in a file called `hello_c.c`:

```C
#include <stdio.h>

void f();

void f() {
    printf("%s", "Hello world from a pure C function!\n");
}
```

- And we want to integrate it into Python. - We need to write the wrapper `hello_cwrapper.pyx`:

```cython
cdef ...
%%cython -I ./cython/

cdef extern from "hello_c.c":
    void f()

cpdef myf():
    f()

myf()  # THIS ALWAYS PRINTS DIRECTLY TO THE CONSOLE
_____no_output_____
BSD-3-Clause
unidad5/01_Cython.ipynb
leliel12/diseno_sci_sfw
Setup. Import the standard Python libraries that are used in this lab.
import boto3
from time import sleep
import subprocess
import pandas as pd
import json
import time
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Import sagemaker and get the execution role's ARN
import sagemaker

region = boto3.Session().region_name
smclient = boto3.Session().client('sagemaker')

from sagemaker import get_execution_role
role_arn = get_execution_role()
print(role_arn)  # Make sure this role has the forecast permissions set to be able to use S3
arn:aws:iam::226154724374:role/service-role/AmazonSageMaker-ExecutionRole-wkshop
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
The last part of the setup process is to validate that your account can communicate with Amazon Forecast; the cell below does just that.
session = boto3.Session(region_name='us-east-1')
forecast = session.client(service_name='forecast')
forecastquery = session.client(service_name='forecastquery')
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Data Preparation
df = pd.read_csv("../data/COF_yearly_Revenue_Data.csv", dtype=object,
                 names=['metric_name', 'timestamp', 'metric_value'])
df.head(3)
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Create the training set and validation set. Use the last year's revenue as the validation set.
# Select 1996 to 2017 in one data frame
df_1996_2017 = df[(df['timestamp'] >= '1995-12-31') & (df['timestamp'] <= '2017-12-31')]

# Select the year 2018 separately for validation
df = pd.read_csv("../data/COF_yearly_Revenue_Data.csv", dtype=object,
                 names=['metric_name', 'timestamp', 'metric_value'])
df_2018 = df[(df['ti...
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Now export them to CSV files and place them into your data folder.
df_1996_2017.to_csv("../data/cof-revenue-train.csv", header=False, index=False)
df_2018.to_csv("../data/cof-revenue-validation.csv", header=False, index=False)
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Define the S3 bucket name where we will upload the data, so that Amazon Forecast can pick it up later.
bucket_name = "sagemaker-capone-forecast-useast1-03"  # Remember to change this to the correct bucket name used for Capital One
folder_name = "cone"  # change this to the folder name of the user
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Upload the data to S3
s3 = session.client('s3')
key = folder_name + "/cof-revenue-train.csv"
s3.upload_file(Filename="../data/cof-revenue-train.csv", Bucket=bucket_name, Key=key)
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Creating the Dataset Group and Dataset. In Amazon Forecast, a dataset is a collection of files that contain data relevant to a forecasting task. A dataset must conform to a schema provided by Amazon Forecast. More details about `Domain` and dataset types can be found in the [documentation](https://docs.aws....
DATASET_FREQUENCY = "Y"
TIMESTAMP_FORMAT = "yyyy-mm-dd"

project = 'cof_revenue_forecastdemo'
datasetName = project + '_ds'
datasetGroupName = project + '_dsg'
s3DataPath = "s3://" + bucket_name + "/" + key
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Create the Dataset Group
create_dataset_group_response = forecast.create_dataset_group(
    DatasetGroupName=datasetGroupName,
    Domain="METRICS",
)
datasetGroupArn = create_dataset_group_response['DatasetGroupArn']
forecast.desc...
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Create the Schema
# Specify the schema of your dataset here. Make sure the order of columns matches the raw data files.
schema = {
    "Attributes": [
        {
            "AttributeName": "metric_name",
            "AttributeType": "string"
        },
        {
            "AttributeName": "timestamp",
            "AttributeType": "timestamp"
        },
        {...
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Create the Dataset
response = forecast.create_dataset(
    Domain="METRICS",
    DatasetType='TARGET_TIME_SERIES',
    DatasetName=datasetName,
    DataFrequency=DATASET_FREQUENCY,
    Schema=schema
)
datasetArn = response['DatasetArn']
forecast.describe_dat...
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Add Dataset to Dataset Group
forecast.update_dataset_group(DatasetGroupArn=datasetGroupArn, DatasetArns=[datasetArn])
_____no_output_____
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Create Data Import Job. Now that Forecast knows how to understand the CSV we are providing, the next step is to import the data from S3 into Amazon Forecast.
datasetImportJobName = 'EP_DSIMPORT_JOB_TARGET'
ds_import_job_response = forecast.create_dataset_import_job(
    DatasetImportJobName=datasetImportJobName,
    DatasetArn=datasetArn,
    DataSource={
        ...
arn:aws:forecast:us-east-1:457927431838:dataset-import-job/cof_revenue_forecastdemo_ds/EP_DSIMPORT_JOB_TARGET
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
Check the status of the dataset import job. When the status changes from **CREATE_IN_PROGRESS** to **ACTIVE**, we can continue to the next steps. Depending on the data size, this can take 5 to 10 minutes.
while True: dataImportStatus = forecast.describe_dataset_import_job(DatasetImportJobArn=ds_import_job_arn)['Status'] print(dataImportStatus) if dataImportStatus != 'ACTIVE' and dataImportStatus != 'CREATE_FAILED': sleep(30) else: break forecast.describe_dataset_import_job(DatasetImportJo...
DatasetArn: arn:aws:forecast:us-east-1:457927431838:dataset-group/cof_revenue_forecastdemo_dsg
Apache-2.0
forecast/1.Getting_Data_Ready(Revenue).ipynb
veerathp/forecastimmersionday
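The polling loop above can be factored into a reusable helper. `wait_for_status` below is a sketch (the name is ours, not from the source) that takes any status-returning callable, so it can be exercised without an AWS call.

```python
import time

# Generic status-polling helper, a sketch of the loop above.
# check_fn is any callable returning the current status string.
def wait_for_status(check_fn, done={'ACTIVE', 'CREATE_FAILED'}, delay=30, max_polls=120):
    for _ in range(max_polls):
        status = check_fn()
        print(status)
        if status in done:
            return status
        time.sleep(delay)
    raise TimeoutError('resource did not reach a terminal status in time')

# Example with a stubbed status source (no AWS call):
statuses = iter(['CREATE_IN_PROGRESS', 'CREATE_IN_PROGRESS', 'ACTIVE'])
result = wait_for_status(lambda: next(statuses), delay=0)
```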
`def nn_topology(num_layers, nodes_per_layer, connections)`: to-do
topology = np.array([ [ 'input', [ ['x', '-'], ['y', '-'], ['classe', '-'] ], ], [ 'n1', [ ['w10', 1], ['w11', '-'], ['w12', '-'], 0, # delta1 0 # o1 ] ], [ 'n2', ...
epoch:1 epoch:2 epoch:3 epoch:4 epoch:5 epoch:6 epoch:7 epoch:8 epoch:9 epoch:10 epoch:11 epoch:12 epoch:13 epoch:14 epoch:15 epoch:16 epoch:17 epoch:18 epoch:19 epoch:20 epoch:21 epoch:22 epoch:23 epoch:24 epoch:25 epoch:26 epoch:27 epoch:28 epoch:29 epoch:30 epoch:31 epoch:32 epoch:33 epoch:34 epoch:35 epoch:36
MIT
lab4/notebooks/50046-nn.ipynb
brun0vieira/psn
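The `nn_topology` builder is marked to-do above; the sketch below (named `nn_topology_sketch`, assuming fully-connected layers, two input features, and ignoring the `connections` argument) generates the same kind of per-neuron structure as the hand-written `topology` list: named weights, then slots for delta and output.

```python
# Hypothetical generator for the hand-written topology structure above.
# Assumes two input features (x, y) and fully-connected layers.
def nn_topology_sketch(num_layers, nodes_per_layer):
    topology = [['input', [['x', '-'], ['y', '-'], ['classe', '-']]]]
    neuron, prev_inputs = 0, 2
    for layer in range(num_layers):
        for _ in range(nodes_per_layer[layer]):
            neuron += 1
            # one bias weight plus one weight per input from the previous layer
            weights = [['w%d%d' % (neuron, i), '-'] for i in range(prev_inputs + 1)]
            topology.append(['n%d' % neuron, weights + [0, 0]])  # delta, output slots
        prev_inputs = nodes_per_layer[layer]
    return topology

topo = nn_topology_sketch(2, [2, 1])  # 2 hidden neurons, then 1 output neuron
```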
TAKE A CLOSER LOOK AT THE UNTRAINED MODEL
draw_plot(topology_not_fitted, out1_not_fitted, out2_not_fitted, out3_not_fitted, 'Modelo Não Treinado') draw_plot(topology, out1, out2, out3, 'Modelo Treinado (Erro zero)')
_____no_output_____
MIT
lab4/notebooks/50046-nn.ipynb
brun0vieira/psn
Chapter 7: Convolutional Neural Networks 7.1 Overall Structure Like the neural networks seen so far, a CNN is built by combining multiple layers. CNNs introduce two new layer types: the Convolution layer and the Pooling layer. The networks covered up to now were fully-connected — every neuron in one layer is linked to every neuron in the adjacent layer — and were implemented under the name Affine layer. For example, a fully-connected network stacks "Affine layer → ReLU activation layer" combinations as its layers and uses a Softmax layer at the output. A CNN instead uses "Convolution layer → Re...
_____no_output_____
MIT
notebooks/section7.ipynb
kamujun/exercise_of_deep_larning_from_scratch
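The layer structure described above can be made concrete with the standard convolution output-size formula, OH = (H + 2P − FH) / S + 1 (and likewise for width):

```python
# Output size of a convolution layer along one dimension:
# input size H, filter size FH, stride S, padding P.
def conv_output_size(input_size, filter_size, stride=1, pad=0):
    return (input_size + 2 * pad - filter_size) // stride + 1

oh = conv_output_size(28, 5, stride=1, pad=0)  # e.g. a 28x28 input, 5x5 filter
```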
Final Code
import pandas as pd import numpy as np from sklearn.neighbors import NearestNeighbors from sklearn.metrics.pairwise import cosine_similarity from scipy.sparse import csr_matrix from sklearn import metrics from sklearn.preprocessing import StandardScaler ''' This cell reads in the data needed for the model. The two fil...
_____no_output_____
Unlicense
Netflix Recommended Movies/DSC 630 Final Code.ipynb
Lemonchasers/Lemonchasers.github.io
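The model above relies on sklearn's `cosine_similarity` over a ratings matrix; a minimal numpy sketch of the same measure, on toy vectors rather than the Netflix data:

```python
import numpy as np

# Cosine similarity between two rating vectors: dot product over
# the product of the vector norms.
def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

u1 = np.array([5.0, 3.0, 0.0, 1.0])  # hypothetical user rating vectors
u2 = np.array([5.0, 3.0, 0.0, 1.0])
u3 = np.array([0.0, 0.0, 4.0, 5.0])
same = cosine_sim(u1, u2)  # identical tastes -> similarity 1
diff = cosine_sim(u1, u3)  # mostly disjoint tastes -> low similarity
```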
Decision trees example Continuous output example: a prediction model that selects motor references. Different decision trees are created according to the required selection criteria. **Step 1**: Import the required libraries.
# import numpy package for arrays and stuff import numpy as np # import matplotlib.pyplot for plotting our result import matplotlib.pyplot as plt # import pandas for importing csv files import pandas as pd
_____no_output_____
Apache-2.0
notebooks/decision trees/.ipynb_checkpoints/decision_trees_3D_strategy_v2-checkpoint.ipynb
aitorochotorena/multirotor-all
Import the file `predicted_values_DT.py` containing the decision tree algorithms
import sys sys.path.insert(0, 'decision trees') from predicted_values_DT import *
_____no_output_____
Apache-2.0
notebooks/decision trees/.ipynb_checkpoints/decision_trees_3D_strategy_v2-checkpoint.ipynb
aitorochotorena/multirotor-all
Read the dataframe for references of motors:
# import dataset # dataset = pd.read_csv('Data.csv') # alternatively open up .csv file to read data import pandas as pd import matplotlib.pyplot as plt path='./Motors/' df = pd.read_csv(path+'Non-Dominated-Motors.csv', sep=';') df = df[['Tnom_Nm','Kt_Nm_A','r_omn','weight_g']] # we select the first five rows df.h...
_____no_output_____
Apache-2.0
notebooks/decision trees/.ipynb_checkpoints/decision_trees_3D_strategy_v2-checkpoint.ipynb
aitorochotorena/multirotor-all
Calculated values: example. Code: Tnom = 2.2 Nm, Kt = ?, R = ? Criteria: - Torque: select the next ref. - Kt: select the nearest ref. - Resistance: select the nearest ref. **1D decision tree** Torque: once the torque value is calculated in the optimization code, we will create a 1D decision tree that selects the higher va...
df_X=pd.DataFrame(df.iloc[:,0]) # column torque df_y=df.iloc[:,0] # column of torque df_X=pd.DataFrame(df_X) xy = pd.concat([df_X,df_y],axis=1) sorted_xy = np.unique(xy,axis=0) #axis X frames=[] for i in range(len(df_X.columns)): # a vector of supplementary points around the reference value to force the regressio...
_____no_output_____
Apache-2.0
notebooks/decision trees/.ipynb_checkpoints/decision_trees_3D_strategy_v2-checkpoint.ipynb
aitorochotorena/multirotor-all
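The "select the next reference" rule that the 1D tree encodes can also be sketched directly: pick the smallest catalogue torque at or above the computed value. The catalogue numbers below are hypothetical.

```python
import bisect

# Next-reference selection: smallest catalogue value >= the target.
def next_reference(sorted_refs, value):
    i = bisect.bisect_left(sorted_refs, value)
    if i == len(sorted_refs):
        raise ValueError('no catalogue reference at or above %s' % value)
    return sorted_refs[i]

catalogue = [1.1, 1.8, 2.5, 3.2, 4.0]  # hypothetical torques (Nm), sorted
chosen = next_reference(catalogue, 2.2)
```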
If the calculated value was 2.2 Nm, the predicted one is:
regressorNext.predict(np.array([[2.2]]))
_____no_output_____
Apache-2.0
notebooks/decision trees/.ipynb_checkpoints/decision_trees_3D_strategy_v2-checkpoint.ipynb
aitorochotorena/multirotor-all
**2D decision tree** With this new predicted value of torque, we will estimate the best Kt constant of the catalogue.For that, we construct a decision tree centered on the reference, which takes as input the torque and as output, the Kt constant:
from sklearn.tree import DecisionTreeRegressor df_X=pd.DataFrame(df.iloc[:,0]) df_y=df.iloc[:,1] # create a regressor object (https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) regressorAver = DecisionTreeRegressor(criterion='mse', max_depth=None, max_features=1, max_...
_____no_output_____
Apache-2.0
notebooks/decision trees/.ipynb_checkpoints/decision_trees_3D_strategy_v2-checkpoint.ipynb
aitorochotorena/multirotor-all
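The "nearest reference" criterion for Kt amounts to a closest-value lookup; a sketch with hypothetical catalogue values:

```python
# Nearest-reference selection: catalogue value minimizing the
# absolute distance to the computed value.
def nearest_reference(refs, value):
    return min(refs, key=lambda r: abs(r - value))

kt_catalogue = [0.031, 0.052, 0.089, 0.120]  # hypothetical Kt values (Nm/A)
chosen_kt = nearest_reference(kt_catalogue, 0.05)
```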
Estimated value: for Tnom = 3.2003048 Nm, the nearest Kt in the dataframe is:
# average_DT(df.iloc[:,0:2],df.iloc[:,2],np.array([[]])) regressorAver.predict(np.array([[3.2003048]]))
_____no_output_____
Apache-2.0
notebooks/decision trees/.ipynb_checkpoints/decision_trees_3D_strategy_v2-checkpoint.ipynb
aitorochotorena/multirotor-all
**3D Decision Tree** In the file `predicted_values_DT.py` we have developed different algorithms which construct decision trees based on the previous reference (previous_DT), on the next references (next_DT) or centered on the reference (average_DT). Considering we have previously obtained the values of Kt and Tnom, a ...
average_DT(df[['Tnom_Nm','Kt_Nm_A']],df['r_omn'],np.array([[3.2003048,0.05161782]]))
_____no_output_____
Apache-2.0
notebooks/decision trees/.ipynb_checkpoints/decision_trees_3D_strategy_v2-checkpoint.ipynb
aitorochotorena/multirotor-all
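Centered on the reference, `average_DT` behaves like a nearest-neighbour lookup in (Tnom, Kt) space; the sketch below uses hypothetical catalogue rows, not the real dataframe:

```python
import math

rows = [  # (Tnom_Nm, Kt_Nm_A, r_omn) -- hypothetical catalogue entries
    (2.5, 0.031, 0.12),
    (3.2, 0.052, 0.21),
    (4.0, 0.089, 0.33),
]

# Return the row closest to (tnom, kt) in Euclidean distance.
def nearest_row(rows, tnom, kt):
    return min(rows, key=lambda r: math.hypot(r[0] - tnom, r[1] - kt))

r_pred = nearest_row(rows, 3.2003048, 0.05161782)[2]
```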
**Visualizing 3D decision tree in scikit-learn**
from IPython.display import Image from io import StringIO import pydot from sklearn import tree df_X=df[['Tnom_Nm','Kt_Nm_A']] df_y=df['r_omn'] # create a regressor object (https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) regressorAver = DecisionTreeR...
_____no_output_____
Apache-2.0
notebooks/decision trees/.ipynb_checkpoints/decision_trees_3D_strategy_v2-checkpoint.ipynb
aitorochotorena/multirotor-all
This is a sketch of adversarial images on MNIST
import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True) import seaborn as sns sns.set_style('white') colors_list = sns.color_palette("Paired", 10)
_____no_output_____
Apache-2.0
notebook/AdversarialMNIST_sketch.ipynb
tiddler/AdversarialMNIST
recreate the network structure
x = tf.placeholder(tf.float32, shape=[None, 784]) y_ = tf.placeholder(tf.float32, shape=[None, 10]) def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) def conv2d...
_____no_output_____
Apache-2.0
notebook/AdversarialMNIST_sketch.ipynb
tiddler/AdversarialMNIST
Load previous model
model_path = './MNIST.ckpt' sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) tf.train.Saver().restore(sess, model_path) import matplotlib.pyplot as plt import numpy as np %matplotlib inline
_____no_output_____
Apache-2.0
notebook/AdversarialMNIST_sketch.ipynb
tiddler/AdversarialMNIST
Extract some "2" images from the test set
index_mask = np.where(mnist.test.labels[:, 2])[0] subset_mask = np.random.choice(index_mask, 10) subset_mask origin_images = mnist.test.images[subset_mask] origin_labels = mnist.test.labels[subset_mask] origin_labels prediction=tf.argmax(y_pred,1) prediction_val = prediction.eval(feed_dict={x: origin_images, keep_prob:...
_____no_output_____
Apache-2.0
notebook/AdversarialMNIST_sketch.ipynb
tiddler/AdversarialMNIST
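The index-mask selection above can be checked on a tiny synthetic label matrix instead of MNIST:

```python
import numpy as np

# One-hot labels for 5 samples with classes 2, 7, 2, 1, 2.
labels = np.eye(10)[[2, 7, 2, 1, 2]]

# Rows whose class is "2", selected exactly as in the notebook cell.
index_mask = np.where(labels[:, 2])[0]
```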
One adversarial example vs. one image
eta = 0.5 iter_num = 10
_____no_output_____
Apache-2.0
notebook/AdversarialMNIST_sketch.ipynb
tiddler/AdversarialMNIST
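These two hyper-parameters drive an iterative gradient-sign attack; the sketch below applies the same update rule to a toy linear score w·x in numpy, with the CNN gradient replaced by the analytic gradient w (an assumption for illustration only):

```python
import numpy as np

eta = 0.5
iter_num = 10
w = np.array([1.0, -2.0, 0.5])   # toy "model" weights
x = np.array([0.2, 0.1, -0.3])   # toy input

score_before = float(w @ x)
for _ in range(iter_num):
    # gradient of the score w.r.t. x is w; step along its sign
    x = x + eta * np.sign(w)
score_after = float(w @ x)
```

Each iteration increases the score by eta times the L1 norm of w, which is why even a small eta moves the classifier's output quickly.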