<h3>CDF and PS Plots illustrating Albedo</h3>
# Run function and plot
fig = plt.figure(figsize=(10, 5))

ax = fig.add_subplot(121)
ind = 0  # Index for Albedo
X1 = cdf_info_array[0, :, ind]
F1 = cdf_info_array[1, :, ind]
plt.title('CDF %s' % file_list[ind][0:-4])
ax.set_ylabel(file_list[ind][0:-4])
ax.set_xlabel('fraction of pixels')
ax.set_xlim([0, 1])
ax.plot(F1, X1)

ax = fig.add_subplot(122)
ind = 0
X1 = cdf_info_array[0, :, ind]  # replace with ps
F1 = cdf_info_array[1, :, ind]
plt.title('FFT %s' % file_list[ind][0:-4])
plt.ylabel(file_list[ind][0:-4])
ax.plot(F1, X1, color='green')
plt.show()
2017 NEON DI JAMB Presentation.ipynb
yetiracing4als/2017-NEON-DI-JAMB
bsd-2-clause
<h3>CDF and PS Plots illustrating Canopy Height</h3>
# Run function and plot
fig = plt.figure(figsize=(10, 5))

ax = fig.add_subplot(121)
ind = 3  # Index for Canopy Height
X1 = cdf_info_array[0, :, ind]
F1 = cdf_info_array[1, :, ind]
plt.title('CDF %s' % file_list[ind][0:-4])
ax.set_ylabel(file_list[ind][0:-4])
ax.set_xlabel('fraction of pixels')
ax.set_xlim([0, 1])
ax.plot(F1, X1)

ax = fig.add_subplot(122)
ind = 0
X1 = cdf_info_array[0, :, ind]  # replace with ps
F1 = cdf_info_array[1, :, ind]
plt.title('FFT %s' % file_list[ind][0:-4])
ax.set_ylabel(file_list[ind][0:-4])
ax.set_xlabel('fraction')
ax.plot(F1, X1, color='red')
plt.show()

# Run function and plot
fig = plt.figure(figsize=(10, 5))

ax = fig.add_subplot(121)
ind = 14  # Index for NDVI
X1 = cdf_info_array[0, :, ind]
F1 = cdf_info_array[1, :, ind]
plt.title('CDF %s' % file_list[ind][0:-4])
ax.set_ylabel(file_list[ind][0:-4])
ax.set_xlabel('fraction of pixels')
ax.set_xlim([0, 1])
ax.plot(F1, X1)

ax = fig.add_subplot(122)
ind = 0
X1 = cdf_info_array[0, :, ind]  # replace with ps
F1 = cdf_info_array[1, :, ind]
plt.title('FFT %s' % file_list[ind][0:-4])
ax.set_ylabel(file_list[ind][0:-4])
ax.set_xlabel('fraction')
ax.plot(F1, X1, color='red')
plt.show()
2017 NEON DI JAMB Presentation.ipynb
yetiracing4als/2017-NEON-DI-JAMB
bsd-2-clause
Reading the video and converting it to a NumPy array. We now read the video frame by frame and convert it into a 4-dimensional NumPy array (n_frames, rows, columns, channels).
# Convert the video to a NumPy array
print("Converting video to NumPy array...")

# Dimensions
cant_frames = vid_reader.get_length()
dimensiones = (cant_frames, mdata['source_size'][1], mdata['source_size'][0], 3)

# Create a 4-dimensional NumPy array (n_frames, rows, columns, 3)
video_np = np.zeros(dimensiones)

# List with the frames of the video; each frame is an image
lista_video = list(vid_reader)

# Iterate over all the images of the video
for i in range(cant_frames):
    video_np[i, :, :, :] = lista_video[i]

# Rescale to the 0-1 range
video_np = video_np / 255

# Close the video reader
vid_reader.close()

# Show one image to verify the video loaded
plt.imshow(video_np[0, :, :, :])
print("...Done")
courses/images/res/material_resuelto/05_video/Tutorial de Localización en videos.ipynb
facundoq/facundoq.github.io
gpl-2.0
Visualizing the video. We display a few frames of the video to verify that it loaded correctly.
# Define a function to display a video in a figure
def ver_video(video, cant_frames):
    # make Jupyter plot in a new window
    %matplotlib qt5
    plt.figure(figsize=(12, 9))  # careful: dimensions are in INCHES
    for i in range(cant_frames):
        plt.imshow(video[i])
        plt.show()
        plt.pause(0.005)
    # give control of the figures back to "inline"
    %matplotlib inline

# Call the function with the NumPy array as parameter to show only 25 frames
ver_video(video_np, 25)
courses/images/res/material_resuelto/05_video/Tutorial de Localización en videos.ipynb
facundoq/facundoq.github.io
gpl-2.0
Processing the video with array operations. Now that the video is loaded as a NumPy array, we only need to process it frame by frame. We will therefore iterate over each frame, filtering the images as we did in the previous tutorial. For better modularity and readability, we will use functions from here on. First, let's define the functions to filter by color, compute the center of mass, and embed an object. Color filtering. We start with a small variant of the color filtering we already saw: now done in a vectorized (matrix) fashion.
import skimage.morphology  # for erosion
import matplotlib.colors   # for converting to HSV

# This function takes an RGB image, segments the object within the
# color ranges limites_HSV, and returns a binary image with the result
def filtrar_imagen(img_rgb, limites_HSV):
    h_min, h_max = limites_HSV[0]
    s_min, s_max = limites_HSV[1]
    v_min, v_max = limites_HSV[2]

    # Transform the color space
    img_hsv = matplotlib.colors.rgb_to_hsv(img_rgb)

    # Segment each channel (done in a vectorized fashion)
    segmentation_mask_h = np.logical_and(img_hsv[:, :, 0] > h_min, img_hsv[:, :, 0] < h_max)
    segmentation_mask_s = np.logical_and(img_hsv[:, :, 1] > s_min, img_hsv[:, :, 1] < s_max)
    segmentation_mask_v = np.logical_and(img_hsv[:, :, 2] > v_min, img_hsv[:, :, 2] < v_max)

    # Combine the 3 masks. Note np.logical_and only combines two arrays,
    # so the calls must be chained (a third argument would be taken as `out`).
    segmentation_mask = np.logical_and(segmentation_mask_h, segmentation_mask_s)
    segmentation_mask = np.logical_and(segmentation_mask, segmentation_mask_v)

    # Erode with skimage
    segmentation_mask = skimage.morphology.binary_erosion(segmentation_mask)
    return segmentation_mask
courses/images/res/material_resuelto/05_video/Tutorial de Localización en videos.ipynb
facundoq/facundoq.github.io
gpl-2.0
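The per-channel thresholding above can be sketched with plain NumPy on a toy "HSV" array (the pixel values and limits below are made up for illustration), including the pairwise chaining of the masks:

```python
import numpy as np

# Toy 2x2 "HSV" image: channels are H, S, V in [0, 1]
img_hsv = np.array([
    [[0.95, 0.8, 0.4], [0.10, 0.8, 0.4]],
    [[0.95, 0.1, 0.4], [0.95, 0.8, 0.9]],
])

# Hypothetical limits: H in (0.9, 1.0), S in (0.5, 0.9), V in (0.2, 0.5)
mask_h = (img_hsv[:, :, 0] > 0.9) & (img_hsv[:, :, 0] < 1.0)
mask_s = (img_hsv[:, :, 1] > 0.5) & (img_hsv[:, :, 1] < 0.9)
mask_v = (img_hsv[:, :, 2] > 0.2) & (img_hsv[:, :, 2] < 0.5)

# Combine masks pairwise; only the top-left pixel passes all three tests
mask = mask_h & mask_s & mask_v
print(mask)
```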
Computing the center of mass. We also do this in a vectorized fashion.
# Compute the center of mass of the image passed as argument
# Returns a NumPy vector of two elements
def calcular_centro_de_masa(mask_img):
    # Positions of the pixels with value 1
    r, c = np.where(mask_img > 0)
    # Element i of r holds the row of the i-th white pixel; same for c with the column.
    # r and c each have size = number of white pixels
    coordinates = np.vstack((r, c))  # 2 x n_pixels matrix: rows stacked on columns
    # Average coordinate over the selected pixels
    masa_center_position = np.mean(coordinates, axis=1)
    # Round and convert to int so the result is a valid pixel coordinate
    masa_center_position = masa_center_position.round().astype(int)
    return masa_center_position
courses/images/res/material_resuelto/05_video/Tutorial de Localización en videos.ipynb
facundoq/facundoq.github.io
gpl-2.0
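As a quick sanity check of the idea, the same steps applied to a tiny hand-built mask recover the center of the white block:

```python
import numpy as np

# Toy binary mask: a 2x2 block of ones whose true center is (1.5, 2.5)
mask = np.zeros((4, 5))
mask[1:3, 2:4] = 1

# Same steps as calcular_centro_de_masa
r, c = np.where(mask > 0)
coordinates = np.vstack((r, c))
center = np.mean(coordinates, axis=1)    # -> [1.5, 2.5]
center_px = center.round().astype(int)   # rounded to a valid pixel coordinate
print(center, center_px)
```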
Embedding one image inside another. Again, we provide a vectorized version of the same algorithm.
def dibujar_objeto_en_imagen(img, objeto, posicion):
    h_obj, w_obj, c = objeto.shape
    h_img, w_img, c = img.shape
    dim_obj = np.array([h_obj, w_obj])  # object dimensions

    # comienzo holds the coordinates of the top-left corner
    comienzo = np.array(posicion) - dim_obj // 2
    # fin holds the coordinates of the bottom-right corner
    fin = comienzo + dim_obj

    # Adjust comienzo and fin in case they fall outside the image;
    # at the same time crop the object image if necessary to remove
    # the parts that do not fit on screen
    if comienzo[0] < 0:
        extra = -comienzo[0]
        objeto = objeto[extra:, :, :]
        comienzo[0] = 0
    if comienzo[1] < 0:
        extra = -comienzo[1]
        objeto = objeto[:, extra:, :]
        comienzo[1] = 0
    if fin[0] >= h_img:
        extra = fin[0] - h_img
        objeto = objeto[:-extra, :, :]
        fin[0] = h_img
    if fin[1] >= w_img:
        extra = fin[1] - w_img
        objeto = objeto[:, :-extra, :]
        fin[1] = w_img

    # With these coordinates, create a view into the submatrix of the
    # original image where the object will be pasted
    vista_submatriz_objetivo = img[comienzo[0]:fin[0], comienzo[1]:fin[1], :]

    # Intensity of each pixel of the object image
    intensidad_objeto = np.mean(objeto, axis=2)

    # The view, the object, and the intensity matrix all share the same
    # spatial dimensions (height and width).
    # Modify the view only where the object intensity is above 0.1,
    # i.e. where the object is not black or very dark.
    vista_submatriz_objetivo[intensidad_objeto > 0.1, :] = objeto[intensidad_objeto > 0.1, :]
courses/images/res/material_resuelto/05_video/Tutorial de Localización en videos.ipynb
facundoq/facundoq.github.io
gpl-2.0
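The clipping logic is the subtle part. A minimal sketch with made-up arrays, covering only the top-left overflow case handled above, shows how the object gets cropped so only the in-bounds corner is pasted:

```python
import numpy as np

img = np.zeros((5, 5, 3))
obj = np.ones((4, 4, 3))   # 4x4 white square
pos = np.array([0, 0])     # paste centered at the top-left corner

start = pos - np.array(obj.shape[:2]) // 2  # [-2, -2], outside the image
end = start + np.array(obj.shape[:2])       # [2, 2]

# Crop the object and clamp the window, as the function above does
if start[0] < 0:
    obj = obj[-start[0]:, :, :]
    start[0] = 0
if start[1] < 0:
    obj = obj[:, -start[1]:, :]
    start[1] = 0

img[start[0]:end[0], start[1]:end[1], :] = obj
print(img[:, :, 0].sum())  # only a 2x2 corner was pasted
```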
A function to process one frame. As before, we combine the three previous functions to process a single frame.
# Receives:
#   frame: the image to process (by reference)
#   limites_HSV: the color ranges used to find the glove
#   img_objeto: the image of the object to overlay at the glove's position
def procesar_frame(frame, limites_HSV, img_objeto):
    # 1) Compute the segmentation mask
    mascara = filtrar_imagen(frame, limites_HSV)
    # 2) Compute the center of mass
    posicion_del_objeto = calcular_centro_de_masa(mascara)
    # 3) Place the object in the image. Remember that Python passes
    #    (mutable) arguments by reference.
    dibujar_objeto_en_imagen(frame, img_objeto, posicion_del_objeto)
courses/images/res/material_resuelto/05_video/Tutorial de Localización en videos.ipynb
facundoq/facundoq.github.io
gpl-2.0
Processing the video. Now that all the necessary functions are defined, process the video frame by frame.
import skimage.io  # to open images

# Process the video frame by frame, segmenting the pink glove
def procesar_video(video_np, limites_HSV, img_objeto):
    # Copy of the original video
    video_procesado = np.copy(video_np)
    # Process every frame
    cant_frames = video_np.shape[0]
    for i in range(cant_frames):
        procesar_frame(video_procesado[i, :, :, :], limites_HSV, img_objeto)
        if i % 10 == 0:  # print a progress message every 10 frames
            print("   ...frame", i, "done.")
    return video_procesado

# HSV ranges for the pink glove
LIMITES_H = (230/255, 250/255)
LIMITES_S = (170/255, 240/255)
LIMITES_V = (40/255, 170/255)
LIMITES_HSV = (LIMITES_H, LIMITES_S, LIMITES_V)

# Object to embed
img_objeto = skimage.io.imread("fuego.jpg") / 255

# Process the video
print("Processing video:")
video_procesado = procesar_video(video_np, LIMITES_HSV, img_objeto)
print("...Done!")

# Visualize the result
ver_video(video_procesado, 50)
courses/images/res/material_resuelto/05_video/Tutorial de Localización en videos.ipynb
facundoq/facundoq.github.io
gpl-2.0
Saving the video. Finally, we can save the generated video as an mp4 file.
def save_video(video_np, file_name):
    # Open a video writer
    vid_writer = imageio.get_writer(file_name)
    # Iterate over all frames
    for i in range(video_np.shape[0]):
        # Convert back to the 0-255 range before writing
        vid_writer.append_data(video_np[i] * 255)
    # Close the writer
    vid_writer.close()

# Save the video
print("Saving video...")
save_video(video_procesado, "nuevo_video.mp4")
print("...Done!")
courses/images/res/material_resuelto/05_video/Tutorial de Localización en videos.ipynb
facundoq/facundoq.github.io
gpl-2.0
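Video writers generally expect 8-bit frames; converting explicitly with clipping avoids overflow and writer warnings. A small NumPy sketch of the conversion (the sample values are made up):

```python
import numpy as np

frame = np.array([[0.0, 0.5], [0.999, 1.0]])  # float frame in the 0-1 range

# Scale, clip, and convert to uint8 before handing frames to a video writer
frame_u8 = np.clip(frame * 255, 0, 255).astype(np.uint8)
print(frame_u8)
```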
Using the same width and height triggers the scroll bar
import folium

width, height = 480, 350

fig = Figure(width=width, height=height)
m = folium.Map(
    location=location, tiles=tiles,
    width=width, height=height,
    zoom_start=zoom_start
)
fig.add_child(m)
fig
examples/WidthHeight.ipynb
ocefpaf/folium
mit
Can figure take relative sizes?
width, height = "100%", 350

fig = Figure(width=width, height=height)
m = folium.Map(
    location=location, tiles=tiles,
    width=width, height=height,
    zoom_start=zoom_start
)
fig.add_child(m)
fig
examples/WidthHeight.ipynb
ocefpaf/folium
mit
I guess not. (Well, it does make sense for a single HTML page, but not for iframes.)
width, height = 480, "100%"

fig = Figure(width=width, height=height)
m = folium.Map(
    location=location, tiles=tiles,
    width=width, height=height,
    zoom_start=zoom_start
)
fig.add_child(m)
fig
examples/WidthHeight.ipynb
ocefpaf/folium
mit
Note that Figure is interpreting this as 50px. We should raise something and be explicit in the docs.
width, height = "50%", "100%"

fig = Figure(width=width, height=height)
m = folium.Map(
    location=location, tiles=tiles,
    width=width, height=height,
    zoom_start=zoom_start
)
fig.add_child(m)
fig

width, height = "150%", "100%"
try:
    folium.Map(
        location=location, tiles=tiles,
        width=width, height=height,
        zoom_start=zoom_start,
    )
except ValueError as e:
    print(e)

width, height = "50%", "80p"
try:
    folium.Map(
        location=location, tiles=tiles,
        width=width, height=height,
        zoom_start=zoom_start,
    )
except ValueError as e:
    print(e)

width, height = 480, -350
try:
    folium.Map(
        location=location, tiles=tiles,
        width=width, height=height,
        zoom_start=zoom_start,
    )
except ValueError as e:
    print(e)
examples/WidthHeight.ipynb
ocefpaf/folium
mit
Maybe we should recommend
width, height = 480, 350

fig = Figure(width=width, height=height)
m = folium.Map(
    location=location, tiles=tiles,
    width="100%", height="100%",
    zoom_start=zoom_start
)
fig.add_child(m)
fig
examples/WidthHeight.ipynb
ocefpaf/folium
mit
Now to a real world example!

Scotch!

<img src="https://c1.staticflickr.com/7/6184/6105844311_dc4c31b8b7_b.jpg" title="Scotch whiskies" width="716" height="403" />

Source: Damien Pollet

First look at the data. Data from: Whisky Classified
# Read the data file and drop the columns we don't care about:
whisky_dataframe = pd.read_csv(
    filepath_or_buffer="whiskies.csv", header=0, sep=',', index_col=1)
whisky_dataframe.drop(['RowID', 'Postcode', ' Latitude', ' Longitude'],
                      inplace=True, axis=1)
whisky_dataframe.head(10)
Tutorial Notebook Development.ipynb
Joao-M-Almeida/ML-Tutorial
mit
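The column-dropping step above can be illustrated on a tiny stand-in DataFrame (the data and most column names here are made up, not the real whisky table):

```python
import pandas as pd

# Hypothetical miniature of the whisky table
df = pd.DataFrame({
    'RowID': [1, 2],
    'Distillery': ['A', 'B'],
    'Body': [2, 3],
    'Postcode': ['X1', 'X2'],
})

# drop() with axis=1 removes columns; inplace=True mutates df directly
df.drop(['RowID', 'Postcode'], inplace=True, axis=1)
print(list(df.columns))
```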
Feature selection and extraction

PCA
random_data_1 = np.random.multivariate_normal(
    mean=[0, 0], cov=[[5, 5], [0, 0.5]], size=100)
random_data_2 = np.random.multivariate_normal(
    mean=[6, 6], cov=[[5, 5], [0, 0.5]], size=100)
random_data = np.concatenate([random_data_1, random_data_2], axis=0)
random_labels = np.concatenate([np.ones((100, 1)), np.zeros((100, 1))], axis=0)

fig = plt.figure(figsize=(8, 8))
plt.scatter(random_data[:, 0], random_data[:, 1], c=random_labels, cmap=cmap_light)
plt.plot([-5, 10], [-5, 10], 'r--')
plt.plot([5, 0], [0, 5], 'g--')
plt.xlim((-7, 14))
plt.ylim((-7, 14))
plt.title('Random Data with Principal Components', fontsize=16)
plt.xlabel('Random Dimension 1', fontsize=14)
plt.ylabel('Random Dimension 2', fontsize=14)
plt.show()

pca = PCA(n_components=2)
transformed_data = pca.fit_transform(random_data)

plt.figure(figsize=(8, 6))
plt.scatter(transformed_data[:, 0], transformed_data[:, 1], c=random_labels, cmap=cmap_light)
plt.plot([-10, 10], [0, 0], 'r--')
plt.xlim((-10, 10))
plt.ylim((-5, 5))
plt.title('Transformed Random Data', fontsize=16)
plt.xlabel('Random Dimension 1', fontsize=14)
plt.ylabel('Random Dimension 2', fontsize=14)
plt.show()

pca = PCA(n_components=1)
transformed_data = pca.fit_transform(random_data)

plt.figure(figsize=(8, 5))
plt.scatter(transformed_data[:, 0], np.zeros((200, 1)), c=random_labels, cmap=cmap_light)
plt.plot([-10, 10], [0, 0], 'r--')
plt.xlim((-10, 10))
plt.ylim((-5, 5))
plt.title('Transformed Random Data', fontsize=16)
plt.xlabel('Random Dimension 1', fontsize=14)
plt.show()

print("% of variance explained by PCA: {:0.1f}%".format(
    pca.explained_variance_ratio_[0] * 100))
Tutorial Notebook Development.ipynb
Joao-M-Almeida/ML-Tutorial
mit
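What PCA computes under the hood can be sketched with NumPy alone: center the data, eigendecompose the covariance matrix, and read the variance ratios off the eigenvalues (the correlated data below is synthetic, built just for this sketch):

```python
import numpy as np

rng = np.random.RandomState(0)
# Strongly correlated 2-D data: almost all variance lies along one direction
x = rng.randn(500)
data = np.column_stack([x, x + 0.1 * rng.randn(500)])

# PCA "by hand": center the data, then eigendecompose the covariance matrix
centered = data - data.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))  # ascending eigenvalues

explained = eigvals[::-1] / eigvals.sum()  # variance ratios, largest first
print(explained)
```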
Model complexity and overfitting
# Adapted from: http://scikit-learn.org/stable/auto_examples/linear_model/plot_polynomial_interpolation.html
# Authors: Mathieu Blondel, Jake Vanderplas
# License: BSD 3 clause

def f(x, noise=False):
    """Function to approximate by polynomial interpolation."""
    if noise:
        return np.sin(x) + np.random.randn(x.shape[0]) / 4
    return np.sin(x)

space_size = 6

# generate points used to plot
x_plot = np.linspace(-space_size, space_size, 100)

# generate points and keep a subset of them
x = np.linspace(-space_size, space_size, 100)
rng = np.random.RandomState(42)
rng.shuffle(x)
x = np.sort(x[:10])
y = f(x, True)

# create matrix versions of these arrays
X = x[:, np.newaxis]
X_plot = x_plot[:, np.newaxis]

colors = ['teal', 'yellowgreen', 'gold', 'blue']
lw = 2
fig = plt.figure(figsize=(12, 12))
for count, degree in enumerate([1, 3, 6, 10]):
    ax = fig.add_subplot(2, 2, count + 1)
    ax.plot(x_plot, f(x_plot), color='cornflowerblue', linewidth=lw,
            label="ground truth")
    ax.scatter(x, y, color='navy', s=30, marker='o', label="training points")
    model = make_pipeline(PolynomialFeatures(degree), Ridge())
    model.fit(X, y)
    y_plot = model.predict(X_plot)
    ax.plot(x_plot, y_plot, color=colors[count], linewidth=lw,
            label="degree %d" % degree)
    ax.legend(loc='lower left')
    ax.set_ylim((-5, 5))
plt.show()
Tutorial Notebook Development.ipynb
Joao-M-Almeida/ML-Tutorial
mit
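The overfitting effect can also be seen numerically, without any plotting: with plain least squares, the training error can only shrink as the degree grows, because each polynomial family contains the lower-degree ones. A NumPy-only sketch on the same kind of noisy sine data:

```python
import numpy as np

rng = np.random.RandomState(42)
x = np.sort(rng.uniform(-6, 6, 10))
y = np.sin(x) + rng.randn(10) / 4

# Training error is non-increasing in the polynomial degree
errors = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)
    errors[degree] = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(degree, errors[degree])
```

A degree-9 polynomial through 10 points essentially interpolates them, so its training error is near zero even though it would generalize poorly.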
Resources:

- Online Courses: Machine Learning by Andrew Ng
- Books: The Master Algorithm; Pattern Recognition and Machine Learning
- Articles: A Few Useful Things to Know about Machine Learning
- Tutorials: Kaggle: Predicting Survival rate on the Titanic; Scikit-Learn Tutorials

Unsupervised Learning

Clustering: K-means

TODO: Explain with toy data set, add video / GIF?

Clustering Scotch:

Model Complexity and Overfitting
random_data = np.random.randn(100, 2)
random_labels = np.random.randint(0, 2, 100)

fig = plt.figure(figsize=(8, 8))
plt.scatter(random_data[:, 0], random_data[:, 1], c=random_labels, cmap=cmap_bold)
plt.xlabel('Random Dimension 1', fontsize=14)
plt.ylabel('Random Dimension 2', fontsize=14)
plt.show()
Tutorial Notebook Development.ipynb
Joao-M-Almeida/ML-Tutorial
mit
K-Nearest Neighbors Classifier
# clf = KNeighborsClassifier(n_neighbors=1)  # try 1 neighbor to compare
clf = KNeighborsClassifier(n_neighbors=10)
clf.fit(random_data, random_labels)
print("Accuracy: {:0.3f}%".format(clf.score(random_data, random_labels) * 100))

(xx, yy, Z) = Utils.predict_mesh(random_data, clf, h=0.01)
fig = plt.figure(figsize=(8, 8))
plt.xlabel('Random Dimension 1', fontsize=14)
plt.ylabel('Random Dimension 2', fontsize=14)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.scatter(random_data[:, 0], random_data[:, 1], c=random_labels, cmap=cmap_bold)
plt.show()

random_labels = np.concatenate([np.ones((50,)), np.zeros((50,))])
random_data = np.concatenate([
    np.add(np.multiply(np.random.randn(50, 2), np.array([0.7, 1.5])), np.array([3, 1])),
    np.multiply(np.random.randn(50, 2), np.array([0.5, 3])),
])

fig = plt.figure(figsize=(8, 8))
plt.scatter(random_data[:, 0], random_data[:, 1], c=random_labels, cmap=cmap_bold)
plt.xlim((-4, 8))
plt.ylim((-6, 6))
plt.xlabel('Random Dimension 1', fontsize=14)
plt.ylabel('Random Dimension 2', fontsize=14)
plt.show()
Tutorial Notebook Development.ipynb
Joao-M-Almeida/ML-Tutorial
mit
There is so much more. This doesn't even scratch the surface. Go ahead and experiment; it's a very interesting field, and there are tons of information and places to learn from!
whisky_data = pd.read_csv(
    filepath_or_buffer="Meta-Critic Whisky Database – Selfbuilts Whisky Analysis.csv")
whisky_data.describe()
whisky_data.head()
Tutorial Notebook Development.ipynb
Joao-M-Almeida/ML-Tutorial
mit
We want to plot the distribution of the mutations along the chromosomes, so we first read the positions of the mutations (from a random sample of 100,000 mutations).
from collections import defaultdict
from ICGC_data_parser import SSM_Reader

distribution = defaultdict(list)

for record in SSM_Reader(filename='data/ssm_sample.vcf'):
    # Associate CHROMOSOME -> [MUTATION POSITIONS]
    distribution[record.CHROM].append(record.POS)
mutations_distribution_chroms.ipynb
Ad115/ICGC-data-parser
mit
We want to add information of the positions of the centromeric regions and the chromosome boundaries. We read this from the table data/chromosome-data.tsv
from collections import namedtuple
import pandas as pd

# Create a custom class whose objects hold information about a chromosome
Chromosome = namedtuple('Chromosome',
                        ['length', 'centromere_start', 'centromere_end'])

# Open the file with the information on the centromeric regions
all_data = pd.read_table('data/chromosome-data.tsv', delimiter='\t')

# Filter for human data
human_data = all_data[all_data['species'] == 'Homo sapiens']

chromosomes = {}
for _, record in human_data.iterrows():
    chrom = record['chromosome']
    length = record['chromosome length (bp)']
    c_start = record['centromeric region start']
    c_end = record['centromeric region end']
    chromosomes[chrom] = Chromosome(length, c_start, c_end)
mutations_distribution_chroms.ipynb
Ad115/ICGC-data-parser
mit
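The namedtuple gives each chromosome record field access by name, which keeps the later plotting code readable. A tiny standalone check (the numbers below are toy values, not real coordinates):

```python
from collections import namedtuple

Chromosome = namedtuple('Chromosome',
                        ['length', 'centromere_start', 'centromere_end'])

# Toy values for illustration only
chrom = Chromosome(length=1000, centromere_start=400, centromere_end=500)

# Fields are accessible by name, and the object still behaves like a tuple
print(chrom.length, chrom.centromere_start, chrom.centromere_end)
```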
To ensure the chromosomes are plotted in the correct order, we provide a list that defines that order
chrom_names = [str(i+1) for i in range(22)] + ['X', 'Y', 'MT']
mutations_distribution_chroms.ipynb
Ad115/ICGC-data-parser
mit
Finally, we can plot the mutations
for chrom in chrom_names:
    fig, ax = plt.subplots(figsize=(8, 2))

    # Main plot
    ax.hist(distribution[chrom], bins=300)
    ax.set(title=f'Chromosome {chrom}')

    if chrom in chromosomes:
        # Fetch data on chromosome length and centromere positions
        chrom_data = chromosomes[chrom]

        # Chromosome boundaries
        ax.axvline(chrom_data.length, ls='--', color='purple')
        ax.axvline(0, ls='--', color='purple')

        # Chromosome centromeres
        ax.axvline(chrom_data.centromere_end, ls=':', color='purple')
        ax.axvline(chrom_data.centromere_start, ls=':', color='purple')

    plt.show()

# Larger plot of a single chromosome, saved to a file
chrom = '10'
fig, ax = plt.subplots(figsize=(13, 3))

ax.hist(distribution[chrom], bins=300)
ax.set(title=f'Chromosome {chrom}')

if chrom in chromosomes:
    chrom_data = chromosomes[chrom]
    ax.axvline(chrom_data.length, ls='--', color='purple')
    ax.axvline(0, ls='--', color='purple')
    ax.axvline(chrom_data.centromere_end, ls=':', color='purple')
    ax.axvline(chrom_data.centromere_start, ls=':', color='purple')

plt.savefig('chromosome-mutations.png')
plt.show()
mutations_distribution_chroms.ipynb
Ad115/ICGC-data-parser
mit
Esperanto
import nltk print( nltk.corpus.udhr.raw('Esperanto-UTF8'))
Jupyter/languages_and_corpus.ipynb
Linguistics-DTU/DTU_8th_Sem_Project
gpl-3.0
Here we give a brief overview of Esperanto. Similarly, we chose the UDHR in all the languages (22 in number) shown in the following world map.
import os
from IPython.display import Image

PATH = "C:\\Users\\user\\Desktop\\Language Space and Mind"
Image(filename=PATH + "\\canvas.png", width=1000, height=500)

list_of_languages = [
    ['English', ['English-Latin1']],
    ['Esperanto', ['Esperanto-UTF8']],
    ['German', ['German_Deutsch-Latin1']],
    ['French', ['French_Francais-Latin1']],
    ['Russian', ['Russian-UTF8', 'Russian_Russky-Cyrillic', 'Russian_Russky-UTF8']],
    ['Farsi', ['Farsi_Persian-UTF8', 'Farsi_Persian-v2-UTF8']],
    ['Finnish', ['Finnish_Suomi-Latin1']],
    ['Hungarian', ['Hungarian_Magyar-Latin1', 'Hungarian_Magyar-Latin2', 'Hungarian_Magyar-UTF8']],
    ['Turkish', ['Turkish_Turkce-Turkish', 'Turkish_Turkce-UTF8']],
    ['Mongolian', ['Mongolian_Khalkha-Cyrillic', 'Mongolian_Khalkha-UTF8']],
    ['Chinese', ['Chinese_Mandarin-GB2312']],
    ['Japanese', ['Japanese_Nihongo-EUC', 'Japanese_Nihongo-SJIS', 'Japanese_Nihongo-UTF8']],
    ['Korean', ['Korean_Hankuko-UTF8']],
    ['Hebrew', ['Hebrew_Ivrit-Hebrew', 'Hebrew_Ivrit-UTF8']],
    ['Hindi', ['Hindi-UTF8', 'Hindi_web-UTF8']],
    ['Kazakh', ['Kazakh-Cyrillic', 'Kazakh-UTF8']],
    ['Swedish', ['Swedish_Svenska-Latin1']],
    ['Icelandic', ['Icelandic_Yslenska-Latin1']],
    ['Sanskrit', ['Sanskrit-UTF8']],
    ['Latin', ['Latin_Latina-Latin1', 'Latin_Latina-v2-Latin1']],
    ['Greek', ['Greek_Ellinika-Greek', 'Greek_Ellinika-UTF8']],
    ['Swahili', ['Swaheli-Latin1', 'Swahili_Kiswahili-Latin1']],
]
Jupyter/languages_and_corpus.ipynb
Linguistics-DTU/DTU_8th_Sem_Project
gpl-3.0
Print the UDHR from each language
import nltk

for i in range(len(list_of_languages)):
    print('\x1b[1;34m' + list_of_languages[i][0] + '\x1b[0m')
    print("\n\n")
    print(nltk.corpus.udhr.raw(list_of_languages[i][1]))
    print("\n\n\n\n\n\n\n")
Jupyter/languages_and_corpus.ipynb
Linguistics-DTU/DTU_8th_Sem_Project
gpl-3.0
Part-of-speech tagging in these languages
import nltk

for i in range(len(list_of_languages)):
    print(list_of_languages[i][0])
    words = nltk.pos_tag(nltk.corpus.udhr.words(list_of_languages[i][1][0]))
    # use a different index variable to avoid shadowing the outer i
    words_types = [words[j][1] for j in range(len(words))]
    print(set(words_types))
    print("\n\n\n\n\n\n\n")
Jupyter/languages_and_corpus.ipynb
Linguistics-DTU/DTU_8th_Sem_Project
gpl-3.0
Now we plot the graph as per the number of distinct elements in the POS set
import nltk

# Count the number of distinct POS tags in each language
num_pos = []
for i in range(len(list_of_languages)):
    print(list_of_languages[i][0])
    words = nltk.pos_tag(nltk.corpus.udhr.words(list_of_languages[i][1][0]))
    # use a different index variable to avoid shadowing the outer i
    words_types = [words[j][1] for j in range(len(words))]
    print(set(words_types))
    num_pos.append(len(set(words_types)))
    print("\n\n\n\n\n\n\n")

# quick sanity checks
num_pos
len(num_pos)
len(list_of_languages)
len(words_types)
len(set(words_types))
Jupyter/languages_and_corpus.ipynb
Linguistics-DTU/DTU_8th_Sem_Project
gpl-3.0
Plot styling
import numpy as np
import matplotlib.pyplot as plt

plt.style.use('bmh')
%matplotlib inline

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 14
fig_size[1] = 9
plt.rcParams["figure.figsize"] = fig_size

lang_names = []
for i in range(len(list_of_languages)):
    lang_names.append(list_of_languages[i][0])
Jupyter/languages_and_corpus.ipynb
Linguistics-DTU/DTU_8th_Sem_Project
gpl-3.0
Here we plot the number of POS tags per language.
import numpy as np
import matplotlib.pyplot as plt

plt.style.use('bmh')

x_pos = np.arange(len(list_of_languages))
plt.bar(x_pos, num_pos, color='r', align='center', alpha=0.4)
plt.xticks(x_pos, lang_names, rotation=45)
plt.show()
Jupyter/languages_and_corpus.ipynb
Linguistics-DTU/DTU_8th_Sem_Project
gpl-3.0
Clearly, the POS taggers in the basic library are not all of the same quality.
import numpy as np
import matplotlib.pyplot as plt

plt.style.use('bmh')

x_pos = np.arange(len(list_of_languages))
barlist = plt.bar(x_pos, num_pos, color='r', align='center', alpha=0.4)

# Highlight languages whose tagger found 15 or more distinct tags
for i in range(len(barlist)):
    if barlist[i].get_height() >= 15:
        barlist[i].set_color('b')

plt.xticks(x_pos, lang_names, rotation=45)
plt.show()
Jupyter/languages_and_corpus.ipynb
Linguistics-DTU/DTU_8th_Sem_Project
gpl-3.0
To move towards a better application of Category Theory to the Esperanto language, we used a pre-tagged corpus. Here is a sample of the pre-tagged data. We relied on the regular structure of the language to build the requisite regular expressions for this purpose.
text = '''
Longe/RB vivadis/VBD en/IN paco/NNS tiu/DT gento/NNS trankvila,/JJ de/IN
Kristanismo/NNS netusita/JJ gis/IN dek-tria/dek-tria jarcento./NNS De/IN la/DT
cetera/JJ mondo/NNS forkasita/JJ per/IN marcoj/NNP kaj/CC densaj/JJ arbaregoj,/NNP
kie/RB kuras/VBP gis/IN nun/VB sovagaj/JJ urbovoj,/NNP la/DT popolo/NNS dauris/VBD
adori/ii la/DT fortojn/NNP de/IN la/DT naturo/NNS sub/IN gigantaj/JJ kverkoj,/NNP
vivanta/JJ templo/NNS de/IN la/DT dioj./NNP Tie/RB tamen/RB ekbatalis/VBD
okcidenta/JJ volo/NNS kun/IN orienta/JJ pacienco./NNS En/IN la/DT mezepoko/NNS
teutonaj/JJ kavaliroj/NNP tiun/DT landon/NNS almilitis,/VBD polaj/JJ nobeloj/NNP
gin/PRP ligis/VBD al/IN sia/PRP$ stato,/NNS moskova/JJ caro/NNS gin/PRP atakis./VBD
Dume/RB alkuradis/VBD el/IN tuta/JJ mondo/NNS persekutataj/JJ Hebreoj/NNP por/IN
starigi/ii manlaboron/NNS kaj/CC komercon/NNS lau/IN invito/NNS rega./JJ Tiel/RB
alia/JJ gento/NNS tre/RB maljuna/JJ trovis/VBD tie/RB novan/JJ Palestinon/NNS
kaj/CC fondis/VBD urbojn/NNP au/CC plenigis/VBD ilin./PRP'''

[nltk.tag.str2tuple(t) for t in text.split()]
Jupyter/languages_and_corpus.ipynb
Linguistics-DTU/DTU_8th_Sem_Project
gpl-3.0
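The WORD/TAG convention is simple enough to parse without NLTK. The helper below is a rough stand-in for `nltk.tag.str2tuple` (which splits on the last separator and upper-cases the tag), handy for checking the pre-tagged data:

```python
# Rough equivalent of nltk.tag.str2tuple, for illustration
def word_tag(token, sep='/'):
    word, _, tag = token.rpartition(sep)
    return (word, tag.upper())

sample = 'Longe/RB vivadis/VBD en/IN paco/NNS'
pairs = [word_tag(t) for t in sample.split()]
print(pairs)
```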
Standardizing a SMILES string The standardize_smiles function provides a quick and easy way to get the standardized version of a given SMILES string:
from molvs import standardize_smiles

standardize_smiles('C[n+]1c([N-](C))cccc1')
examples/standardization.ipynb
mcs07/MolVS
mit
While this is convenient for one-off cases, it's inefficient when dealing with multiple molecules and doesn't allow any customization of the standardization process. The Standardizer class The Standardizer class provides flexibility to specify custom standardization stages and efficiently standardize multiple molecules.
from rdkit import Chem
from molvs import Standardizer

mol = Chem.MolFromSmiles('[Na]OC(=O)c1ccc(C[S+2]([O-])([O-]))cc1')
mol

s = Standardizer()
smol = s.standardize(mol)
smol

Chem.MolToSmiles(smol)
examples/standardization.ipynb
mcs07/MolVS
mit
The Standardizer class takes a number of initialization parameters to customize its behaviour:
from molvs.normalize import Normalization

norms = (
    Normalization('Nitro to N+(O-)=O',
                  '[*:1][N,P,As,Sb:2](=[O,S,Se,Te:3])=[O,S,Se,Te:4]>>[*:1][*+1:2]([*-1:3])=[*:4]'),
    Normalization('Pyridine oxide to n+O-', '[n:1]=[O:2]>>[n+:1][O-:2]'),
)

my_s = Standardizer(normalizations=norms)
smol = my_s.standardize(mol)
smol
examples/standardization.ipynb
mcs07/MolVS
mit
Notice that the sulfone group wasn't normalized in this case, because when initializing the Standardizer we only specified two Normalizations. The default list of normalizations is molvs.normalize.NORMALIZATIONS. A Standardizer instance can be reused on many molecules once it has been initialized with some parameters:
my_s.standardize(Chem.MolFromSmiles('C1=C(C=C(C(=C1)O)C(=O)[O-])[S](O)(=O)=O.[Na+]')) my_s.standardize(Chem.MolFromSmiles('[Ag]OC(=O)O[Ag]'))
examples/standardization.ipynb
mcs07/MolVS
mit
In this table, there is one row for every transaction and a transaction_time column that specifies when the transaction took place. This means that transaction_time is the time index because it indicates when the information in each row became known and available for feature calculations. For now, ignore the _ft_last_time column. That is a featuretools-generated column that will be discussed later on. However, not every datetime column is a time index. Consider the customers dataframe:
es['customers']
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
Here, we have two time columns, join_date and birthday. While either column might be useful for making features, the join_date should be used as the time index because it indicates when that customer first became available in the dataset. What is the Cutoff Time? The cutoff_time specifies the last point in time that a row’s data can be used for a feature calculation. Any data after this point in time will be filtered out before calculating features. For example, let's consider a dataset of timestamped customer transactions, where we want to predict whether customers 1, 2 and 3 will spend $500 between 04:00 on January 1 and the end of the day. When building features for this prediction problem, we need to ensure that no data after 04:00 is used in our calculations. <img src="../_static/images/retail_ct.png" width="400" align="center" alt="retail cutoff time diagram">
fm, features = ft.dfs(entityset=es, target_dataframe_name='customers', cutoff_time=pd.Timestamp("2014-1-1 04:00"), instance_ids=[1,2,3], cutoff_time_in_index=True) fm
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
Even though the entityset contains the complete transaction history for each customer, only data with a time index up to and including the cutoff time was used to calculate the features above. Using a Cutoff Time DataFrame Oftentimes, the training examples for machine learning will come from different points in time. To specify a unique cutoff time for each row of the resulting feature matrix, we can pass a dataframe which includes one column for the instance id and another column for the corresponding cutoff time. These columns can be in any order, but they must be named properly. The column with the instance ids must either be named instance_id or have the same name as the target dataframe index. The column with the cutoff time values must either be named time or have the same name as the target dataframe time_index. The column names for the instance ids and the cutoff time values should be unambiguous. Passing a dataframe that contains both a column with the same name as the target dataframe index and a column named instance_id will result in an error. Similarly, if the cutoff time dataframe contains both a column with the same name as the target dataframe time_index and a column named time an error will be raised.
cutoff_times = pd.DataFrame() cutoff_times['customer_id'] = [1, 2, 3, 1] cutoff_times['time'] = pd.to_datetime(['2014-1-1 04:00', '2014-1-1 05:00', '2014-1-1 06:00', '2014-1-1 08:00']) cutoff_times['label'] = [True, True, False, True] cutoff_times fm, features = ft.dfs(entityset=es, target_dataframe_name='customers', cutoff_time=cutoff_times, cutoff_time_in_index=True) fm
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
We can now see that every row of the feature matrix is calculated at the corresponding time in the cutoff time dataframe. Because we calculate each row at a different time, it is possible to have a repeat customer. In this case, we calculated the feature vector for customer 1 at both 04:00 and 08:00. Training Window By default, all data up to and including the cutoff time is used. We can restrict the amount of historical data that is selected for calculations using a "training window." Here's an example of using a two hour training window:
window_fm, window_features = ft.dfs(entityset=es, target_dataframe_name="customers", cutoff_time=cutoff_times, cutoff_time_in_index=True, training_window="2 hour") window_fm
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
We can see that the counts for the same feature are lower after we shorten the training window:
fm[["COUNT(transactions)"]] window_fm[["COUNT(transactions)"]]
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
Setting a Last Time Index The training window in Featuretools limits the amount of past data that can be used while calculating a particular feature vector. A row in the dataframe is filtered out if the value of its time index is either before or after the training window. This works for dataframes where a row occurs at a single point in time. However, a row can sometimes exist for a duration. For example, a customer's session has multiple transactions which can happen at different points in time. If we are trying to count the number of sessions a user has in a given time period, we often want to count all the sessions that had any transaction during the training window. To accomplish this, we need to not only know when a session starts, but also when it ends. The last time that an instance appears in the data is stored in the _ft_last_time column on the dataframe. We can compare the time index and the last time index of the sessions dataframe above:
last_time_index_col = es['sessions'].ww.metadata.get('last_time_index') es['sessions'][['session_start', last_time_index_col]].head()
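The relevance check described here can be sketched with plain pandas. The toy sessions dataframe and column names below are illustrative, not Featuretools internals:

```python
import pandas as pd

# Toy sessions: each row exists over the span [session_start, last_time].
sessions = pd.DataFrame({
    "session_id": [1, 2, 3],
    "session_start": pd.to_datetime(["2014-01-01 00:00", "2014-01-01 02:00", "2014-01-01 05:00"]),
    "last_time": pd.to_datetime(["2014-01-01 01:00", "2014-01-01 03:30", "2014-01-01 06:00"]),
})

cutoff = pd.Timestamp("2014-01-01 04:00")
window_start = cutoff - pd.Timedelta(hours=2)

# Keep a session if it was still active after the window opened
# (its last time index falls after window_start) and it had started
# by the cutoff time.
relevant = sessions[(sessions["last_time"] > window_start) &
                    (sessions["session_start"] <= cutoff)]
```

Session 1 ended before the window opened and session 3 started after the cutoff, so only session 2 survives the filter.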
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
Featuretools can automatically add last time indexes to every DataFrame in an Entityset by running EntitySet.add_last_time_indexes(). When using a training window, if a last_time_index has been set, Featuretools will check to see if the last_time_index is after the start of the training window. That, combined with the cutoff time, allows DFS to discover which data is relevant for a given training window. Excluding data at cutoff times Setting include_cutoff_time to False also impacts how data at the edges of training windows are included or excluded. Take this slice of data as an example:
df = es['transactions'] df[df["session_id"] == 1].head()
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
Looking at the data, transactions occur every 65 seconds. To check how include_cutoff_time affects training windows, we can calculate features at the time of a transaction while using a 65-second training window. This creates a training window with a transaction at both endpoints of the window. For this example, we'll find the sum of all transactions for session id 1 that are in the training window.
from featuretools.primitives import Sum sum_log = ft.Feature( es['transactions'].ww['amount'], parent_dataframe_name='sessions', primitive=Sum, ) cutoff_time = pd.DataFrame({ 'session_id': [1], 'time': ['2014-01-01 00:04:20'], }).astype({'time': 'datetime64[ns]'})
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
With include_cutoff_time=True, the oldest point in the training window (2014-01-01 00:03:15) is excluded and the cutoff time point is included. This means only transaction 371 is in the training window, so the sum of all transaction amounts is 31.54
# Case1. include_cutoff_time = True actual = ft.calculate_feature_matrix( features=[sum_log], entityset=es, cutoff_time=cutoff_time, cutoff_time_in_index=True, training_window='65 seconds', include_cutoff_time=True, ) actual
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
Whereas with include_cutoff_time=False, the oldest point in the window is included and the cutoff time point is excluded. So in this case transaction 116 is included and transaction 371 is excluded, and the sum is 78.92
# Case2. include_cutoff_time = False actual = ft.calculate_feature_matrix( features=[sum_log], entityset=es, cutoff_time=cutoff_time, cutoff_time_in_index=True, training_window='65 seconds', include_cutoff_time=False, ) actual
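The two boundary conventions amount to half-open intervals on opposite ends of the training window. A plain-Python sketch using the amounts from the example above, with times reduced to seconds for simplicity:

```python
# Transactions every 65 seconds; only the two nearest the cutoff matter here.
# Times are in seconds; amounts are taken from transactions 116 and 371 above.
transactions = {195: 78.92, 260: 31.54}  # time -> amount
cutoff, window = 260, 65

# include_cutoff_time=True: the window is the half-open interval (cutoff - window, cutoff]
sum_true = sum(amt for t, amt in transactions.items() if cutoff - window < t <= cutoff)

# include_cutoff_time=False: the window is [cutoff - window, cutoff)
sum_false = sum(amt for t, amt in transactions.items() if cutoff - window <= t < cutoff)
```

Each convention keeps exactly one of the two endpoint transactions, which is why the two sums differ.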
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
Approximating Features by Rounding Cutoff Times For each unique cutoff time, Featuretools must perform operations to select the data that’s valid for computations. If there are a large number of unique cutoff times relative to the number of instances for which we are calculating features, the time spent filtering data can add up. By reducing the number of unique cutoff times, we minimize the overhead from searching for and extracting data for feature calculations. One way to decrease the number of unique cutoff times is to round cutoff times to an earlier point in time. An earlier cutoff time is always valid for predictive modeling — it just means we’re not using some of the data we could potentially use while calculating that feature. So, we gain computational speed by losing a small amount of information. To understand when an approximation is useful, consider calculating features for a model to predict fraudulent credit card transactions. In this case, an important feature might be, "the average transaction amount for this card in the past". While this value can change every time there is a new transaction, updating it less frequently might not impact accuracy. fm = ft.calculate_feature_matrix(features=features, entityset=es_transactions, cutoff_time=ct_transactions, approximate="1 day") In this computation, features that can be approximated will be calculated at 1 day intervals, while features that cannot be approximated (e.g "where did this transaction occur?") will be calculated at the exact cutoff time. Secondary Time Index It is sometimes the case that information in a dataset is updated or added after a row has been created. This means that certain columns may actually become known after the time index for a row. Rather than drop those columns to avoid leaking information, we can create a secondary time index to indicate when those columns become known.
import urllib.request as urllib2 opener = urllib2.build_opener() opener.addheaders = [('Testing', 'True')] urllib2.install_opener(opener) es_flight = ft.demo.load_flight(nrows=100) es_flight es_flight['trip_logs'].head(3)
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
For every trip log, the time index is date_scheduled, which is when the airline decided on the scheduled departure and arrival times, as well as what route will be flown. We don't know the rest of the information about the actual departure/arrival times and the details of any delay at this time. However, it is possible to know everything about how a trip went after it has arrived, so we can use that information at any time after the flight lands. Using a secondary time index, we can indicate to Featuretools which columns in our flight logs are known at the time the flight is scheduled, plus which are known at the time the flight lands. <img src="../_static/images/flight_ti_2.png" width="400" align="center" alt="flight secondary time index diagram"> In Featuretools, when adding the dataframe to the EntitySet, we set the secondary time index to be the arrival time like this: es = ft.EntitySet('Flight Data') arr_time_columns = ['arr_delay', 'dep_delay', 'carrier_delay', 'weather_delay', 'national_airspace_delay', 'security_delay', 'late_aircraft_delay', 'canceled', 'diverted', 'taxi_in', 'taxi_out', 'air_time', 'dep_time'] es.add_dataframe( dataframe_name='trip_logs', dataframe=data, index='trip_log_id', make_index=True, time_index='date_scheduled', secondary_time_index={'arr_time': arr_time_columns}) By setting a secondary time index, we can still use the delay information from a row, but only when it becomes known. Flight Predictions Let's make some features at varying times using the flight example described above. Trip 14 is a flight from CLT to PHX on January 31, 2017 and trip 92 is a flight from PIT to DFW on January 1. We can set any cutoff time before the flight is scheduled to depart, emulating how we would make the prediction at that point in time. We set two cutoff times for trip 14 at two different times: one which is more than a month before the flight and another which is only 5 days before. 
For trip 92, we'll only set one cutoff time, three days before it is scheduled to leave. <img src="../_static/images/flight_ct.png" width="500" align="center" alt="flight cutoff time diagram"> Our cutoff time dataframe looks like this:
ct_flight = pd.DataFrame() ct_flight['trip_log_id'] = [14, 14, 92] ct_flight['time'] = pd.to_datetime(['2016-12-28', '2017-1-25', '2016-12-28']) ct_flight['label'] = [True, True, False] ct_flight
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
Now, let's calculate the feature matrix:
fm, features = ft.dfs(entityset=es_flight, target_dataframe_name='trip_logs', cutoff_time=ct_flight, cutoff_time_in_index=True, agg_primitives=["max"], trans_primitives=["month"],) fm[['flights.origin', 'flights.dest', 'label', 'flights.MAX(trip_logs.arr_delay)', 'MONTH(scheduled_dep_time)']]
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
Let's understand the output: A row was made for every id-time pair in ct_flight, which is returned as the index of the feature matrix. The output was sorted by cutoff time. Because of the sorting, it's often helpful to pass in a label with the cutoff time dataframe so that it will remain sorted in the same fashion as the feature matrix. Any additional columns beyond id and cutoff_time will not be used for making features. The column flights.MAX(trip_logs.arr_delay) is not always defined. It can only have any real values when there are historical flights to aggregate. Notice that, for trip 14, there wasn't any historical data when we made the feature a month in advance, but there were flights to aggregate when we shortened it to 5 days. These are powerful features that are often excluded in manual processes because of how hard they are to make. Creating and Flattening a Feature Tensor The make_temporal_cutoffs function can be paired with DFS to create and flatten a feature tensor rather than making multiple feature matrices at different delays. The function takes in the following parameters: instance_ids (list, pd.Series, or np.ndarray): A list of instances. cutoffs (list, pd.Series, or np.ndarray): An associated list of cutoff times. window_size (str or pandas.DateOffset): The amount of time between each cutoff time in the created time series. start (datetime.datetime or pd.Timestamp): The first cutoff time in the created time series. num_windows (int): The number of cutoff times to create in the created time series. Only two of the three options window_size, start, and num_windows need to be specified to uniquely determine an equally-spaced set of cutoff times at which to compute each instance. If your cutoff times are the ones used above:
cutoff_times
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
Then passing in window_size='1h' and num_windows=2 makes one row an hour over the last two hours to produce the following new dataframe. The result can be directly passed into DFS to make features at the different time points.
temporal_cutoffs = ft.make_temporal_cutoffs(cutoff_times['customer_id'], cutoff_times['time'], window_size='1h', num_windows=2) temporal_cutoffs fm, features = ft.dfs(entityset=es, target_dataframe_name='customers', cutoff_time=temporal_cutoffs, cutoff_time_in_index=True) fm
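The spacing logic can be reproduced with a few lines of pandas. This is a sketch of the idea, not the actual ft.make_temporal_cutoffs implementation:

```python
import pandas as pd

def temporal_cutoffs(instance_id, cutoff, window_size, num_windows):
    # num_windows equally spaced cutoff times, ending at `cutoff`.
    step = pd.Timedelta(window_size)
    return [(instance_id, cutoff - i * step) for i in reversed(range(num_windows))]

# window_size='1h', num_windows=2 yields one row an hour over the last two hours.
rows = temporal_cutoffs(1, pd.Timestamp("2014-01-01 04:00"), "1h", 2)
```

For customer 1 with a 04:00 cutoff this produces cutoffs at 03:00 and 04:00, matching the per-hour rows that DFS then computes features for.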
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
<div class='alert alert-warning' style='width:600px; font-size:16px'> <h1>GLOBAL VARIABLE WARNING</h1> Here I download updated clinical data from the TCGA Data Portal. This is a secure site which uses HTTPS. I had to give it a path to my ca-cert for the download to work. Download a copy of a generic cacert.pem [here](http://curl.haxx.se/ca/cacert.pem). </div>
PATH_TO_CACERT = '/cellar/users/agross/cacert.pem'
Notebooks/get_all_MAFs.ipynb
theandygross/CancerData
mit
Download most recent files from MAF dashboard
out_path = OUT_PATH + '/MAFs_new_2/' if not os.path.isdir(out_path): os.makedirs(out_path) maf_dashboard = 'https://confluence.broadinstitute.org/display/GDAC/MAF+Dashboard' !curl --cacert $PATH_TO_CACERT $maf_dashboard -o tmp.html
Notebooks/get_all_MAFs.ipynb
theandygross/CancerData
mit
Use BeautifulSoup to parse out all of the links in the table
f = open('tmp.html', 'rb').read() soup = BeautifulSoup(f) r = [l.get('href') for l in soup.find_all('a') if l.get('href') != None and '.maf' in l.get('href')]
Notebooks/get_all_MAFs.ipynb
theandygross/CancerData
mit
Download all of the MAFs by following the links This takes a while, as I'm downloading all of the data. I read in the table first to count the number of comment lines and a second time to actually load the data. Yes, there is likely a more efficient way to do this, but I'm waiting on https://github.com/pydata/pandas/issues/2685
t = pd.read_table(f, nrows=10, sep='not_real_term', header=None, squeeze=True, engine='python') cols = ['Hugo_Symbol', 'NCBI_Build', 'Chromosome', 'Start_position', 'End_position', 'Strand', 'Reference_Allele', 'Tumor_Seq_Allele1', 'Tumor_Seq_Allele2', 'Tumor_Sample_Barcode', 'Protein_Change', 'Variant_Classification','Variant_Type'] maf = {} for f in r: try: t = pd.read_table(f, nrows=10, sep='not_real_term', header=None, squeeze=True, engine='python') skip = t.apply(lambda s: s.startswith('#')) skip = list(skip[skip==True].index) h = pd.read_table(f, header=0, index_col=None, skiprows=skip, engine='python', nrows=0) cc = list(h.columns.intersection(cols)) maf[f] = pd.read_table(f, header=0, index_col=None, skiprows=skip, engine='c', usecols=cc) except HTTPError: print f m2 = pd.concat(maf) m3 = m2.dropna(axis=1, how='all')
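In pandas versions released since this notebook was written, the comment parameter of read_csv skips '#'-prefixed lines in a single pass, which removes the need for the double read. A minimal illustration with a made-up two-column table, not a real MAF:

```python
import io
import pandas as pd

# A tiny made-up tab-separated table with '#'-prefixed header comments.
raw = "#version 2.4\n#center broad.mit.edu\nHugo_Symbol\tChromosome\nTP53\t17\n"

# comment='#' drops the comment lines before the header is parsed.
df = pd.read_csv(io.StringIO(raw), sep="\t", comment="#")
```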
Notebooks/get_all_MAFs.ipynb
theandygross/CancerData
mit
Reduce MAF down to the most useful columns
m4 = m3[cols] m4 = m4.reset_index() #m4.index = map(lambda s: s.split('/')[-1], m4.index) m4 = m4.drop_duplicates(subset=['Hugo_Symbol','Tumor_Sample_Barcode','Start_position']) m4 = m4.reset_index() m4.to_csv(out_path + 'mega_maf.csv')
Notebooks/get_all_MAFs.ipynb
theandygross/CancerData
mit
Get gene by patient mutation count matrix and save
m5 = m4.ix[m4.Variant_Classification != 'Silent'] cc = m5.groupby(['Hugo_Symbol','Tumor_Sample_Barcode']).size() cc = cc.reset_index() cc.to_csv(out_path + 'meta.csv') cc.shape
Notebooks/get_all_MAFs.ipynb
theandygross/CancerData
mit
...and looks something like this in Western music notation: We can convert that into a sequence of bits, with each 1 representing an onset, and 0 representing a rest (similar to the way a sequencer works). Doing so yields this: [1 0 0 1 0 0 1 0] ...which we can conveniently store as a list in Python. Actually, this is a good time to start diving directly into code. First, let's import all the Python libraries we need:
%matplotlib inline import math # Standard library imports import IPython.display as ipd, librosa, librosa.display, numpy as np, matplotlib.pyplot as plt # External libraries import pardir; pardir.pardir() # Allow imports from parent directory import bjorklund # Fork of Brian House's implementation of Bjorklund's algorithm https://github.com/brianhouse/bjorklund import fibonaccistretch # Functions pertaining specifically to Fibonacci stretch; much of what we'll use here
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
We can listen to the pulses and steps together:
# Generate the clicks tresillo_pulse_clicks, tresillo_step_clicks = fibonaccistretch.generate_rhythm_clicks(tresillo_rhythm, tresillo_click_interval) tresillo_pulse_times, tresillo_step_times = fibonaccistretch.generate_rhythm_times(tresillo_rhythm, tresillo_click_interval) # Tresillo as an array print(tresillo_rhythm) # Tresillo audio, plotted plt.figure(figsize=(8, 2)) librosa.display.waveplot(tresillo_pulse_clicks + tresillo_step_clicks, sr=sr) plt.vlines(tresillo_pulse_times + 0.005, -1, 1, color="r") plt.vlines(tresillo_step_times + 0.005, -0.5, 0.5, color="r") # Tresillo as audio ipd.Audio(tresillo_pulse_clicks + tresillo_step_clicks, rate=44100)
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
You can follow along with the printed array and hear that every 1 corresponds to a pulse, and every 0 to a step. In addition, let's define pulse lengths as the number of steps that each pulse lasts:
tresillo_pulse_lengths = fibonaccistretch.calculate_pulse_lengths(tresillo_rhythm) print("Tresillo pulse lengths: {}".format(tresillo_pulse_lengths))
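The implementation behind calculate_pulse_lengths isn't shown above; a minimal version might look like this, assuming the last pulse wraps around to the start of the measure:

```python
def calculate_pulse_lengths(rhythm):
    # Steps from each pulse onset to the next, wrapping past the measure end.
    onsets = [i for i, x in enumerate(rhythm) if x == 1]
    n = len(rhythm)
    return [(onsets[(j + 1) % len(onsets)] - onset) % n
            for j, onset in enumerate(onsets)]

tresillo = [1, 0, 0, 1, 0, 0, 1, 0]
lengths = calculate_pulse_lengths(tresillo)  # onsets at 0, 3, 6 -> [3, 3, 2]
```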
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
Note that the tresillo rhythm's pulse lengths all fall along the Fibonacci sequence. This allows us to do some pretty fun things, as we'll see in a bit. But first let's take a step back. Part 2 - Fibonacci rhythms 2.1 Fibonacci numbers The Fibonacci sequence is a particular sequence in which each value is the sum of the two preceding values. We can define a function in Python that gives us the nth Fibonacci number:
fibonaccistretch.fibonacci??
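The ?? magic displays the source in a live notebook but isn't captured here; an iterative sketch, assuming the convention fibonacci(0) == 0, would be:

```python
def fibonacci(n):
    # nth Fibonacci number: 0, 1, 1, 2, 3, 5, 8, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```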
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
And the first 20 numbers in the sequence are:
first_twenty_fibs = np.array([fibonaccistretch.fibonacci(n) for n in range(20)]) plt.figure(figsize=(16,1)) plt.scatter(first_twenty_fibs, np.zeros(20), c="r") plt.axis("off") print(first_twenty_fibs)
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
We can also use the golden ratio to find the index of a Fibonacci number:
fibonaccistretch.find_fibonacci_index?? fib_n = 21 fib_i = fibonaccistretch.find_fibonacci_index(fib_n) assert(fibonaccistretch.fibonacci(fib_i) == fib_n) print("{} is the {}th Fibonacci number".format(fib_n, fib_i))
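The golden-ratio trick rests on Binet's formula, F(i) ≈ φ^i / √5, which can be inverted and rounded to recover the index. A sketch using the same fibonacci(0) == 0 indexing (the library's version may differ in detail):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio

def find_fibonacci_index(fib_n):
    # Invert Binet's formula: i ~= log(F * sqrt(5)) / log(PHI), then round.
    return int(round(math.log(fib_n * math.sqrt(5)) / math.log(PHI)))
```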
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
We might classify it as a Fibonacci rhythm, since every one of its pulse lengths is a Fibonacci number. If we wanted to expand that rhythm along the Fibonacci sequence, what would that look like? An intuitive (and, as it turns out, musically satisfying) method would be to take every pulse length and simply replace it with the Fibonacci number that follows it. So in our example, the 3s become 5s, and the 2 becomes 3.
expanded_pulse_lengths = fibonaccistretch.fibonacci_expand_pulse_lengths(tresillo_pulse_lengths) print("Expanded tresillo pulse lengths: {}".format(expanded_pulse_lengths))
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
We'll also want to be able to contract rhythms along the Fibonacci sequence (i.e. choose numbers in decreasing order instead of increasing order), as well as specify how many Fibonacci numbers away we want to end up. We can generalize this expansion and contraction into a single function that can scale pulse lengths:
# Note that `scale_amount` determines the direction and magnitude of the scaling. # If `scale_amount` > 0, it corresponds to a rhythmic expansion. # If `scale_amount` < 0, it corresponds to a rhythmic contraction. # If `scale_amount` == 0, the original scale is maintained and no changes are made. print("Tresillo pulse lengths: {}".format(tresillo_pulse_lengths)) print("Tresillo pulse lengths expanded by 1: {}".format(fibonaccistretch.fibonacci_scale_pulse_lengths(tresillo_pulse_lengths, scale_amount=1))) print("Tresillo pulse lengths expanded by 2: {}".format(fibonaccistretch.fibonacci_scale_pulse_lengths(tresillo_pulse_lengths, scale_amount=2))) print("Tresillo pulse lengths contracted by 1: {}".format(fibonaccistretch.fibonacci_scale_pulse_lengths(tresillo_pulse_lengths, scale_amount=-1)))
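One plausible sketch of this generalization: find each pulse length's position in the Fibonacci sequence, shift it by scale_amount, and read the sequence back out. The library's actual implementation may differ:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def find_fibonacci_index(fib_n):
    # Invert Binet's formula to recover the index of a Fibonacci number.
    return int(round(math.log(fib_n * math.sqrt(5)) / math.log(PHI)))

def fibonacci_scale_pulse_lengths(pulse_lengths, scale_amount):
    # Shift each pulse length scale_amount positions along the Fibonacci sequence.
    return [fibonacci(find_fibonacci_index(p) + scale_amount) for p in pulse_lengths]
```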
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
Of course, once we have these scaled pulse lengths, we'll want to be able to convert them back into rhythms, in our original array format:
# Scale tresillo rhythm by a variety of factors and plot the results for scale_factor, color in [(0, "r"), (1, "g"), (2, "b"), (-1, "y")]: scaled_rhythm = fibonaccistretch.fibonacci_scale_rhythm(tresillo_rhythm, scale_factor) scaled_pulse_indices = np.array([p_i for p_i,x in enumerate(scaled_rhythm) if x > 0 ]) scaled_step_indices = np.array([s_i for s_i in range(len(scaled_rhythm))]) scaled_pulse_ys = np.empty(len(scaled_pulse_indices)) scaled_pulse_ys.fill(0) scaled_step_ys = np.empty(len(scaled_step_indices)) scaled_step_ys.fill(0) # plt.figure(figsize=(len([scaled_rhythm])*0.5, 1)) plt.figure(figsize=(8, 1)) if scale_factor > 0: plt.title("Tresillo rhythm expanded by {}: {}".format(abs(scale_factor), scaled_rhythm), loc="left") elif scale_factor < 0: plt.title("Tresillo rhythm contracted by {}: {}".format(abs(scale_factor), scaled_rhythm), loc="left") else: # scale_factor == 0, which means rhythm is unaltered plt.title("Tresillo rhythm: {}".format(scaled_rhythm), loc="left") # plt.scatter(scaled_pulse_indices, scaled_pulse_ys, c=color) # plt.scatter(scaled_step_indices, scaled_step_ys, c="k", alpha=0.5) # plt.grid(True) plt.vlines(scaled_pulse_indices, -1, 1, color=color) plt.vlines(scaled_step_indices, -0.5, 0.5, color=color, alpha=0.5) plt.xticks(np.arange(0, plt.xlim()[1], 1)) plt.yticks([]) # plt.xticks(np.linspace(0, 10, 41))
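Going from pulse lengths back to the binary array is straightforward: each pulse contributes a 1 followed by length - 1 zeros. A minimal sketch:

```python
def pulse_lengths_to_rhythm(pulse_lengths):
    # Each pulse becomes an onset (1) followed by length - 1 rests (0).
    rhythm = []
    for length in pulse_lengths:
        rhythm.extend([1] + [0] * (length - 1))
    return rhythm
```

Feeding the tresillo pulse lengths [3, 3, 2] back through this recovers the original [1 0 0 1 0 0 1 0] array.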
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
An important feature we want to extract from the audio is tempo (i.e. the time interval between steps). Let's estimate that using the librosa.beat.tempo method (which requires us to first detect onsets):
tempo = fibonaccistretch.estimate_tempo(y, sr) print("Tempo (calculated): {}".format(tempo)) tempo = 93.0 # Hard-coded from prior knowledge print("Tempo (hard-coded): {}".format(tempo))
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
<div style="color:gray"> (We can see that the tempo we've estimated differs by approximately 1BPM from the tempo that we've hard-coded from prior knowledge. It's often the case that such automatic feature extraction tools and algorithms require a fair bit of fine-tuning, so we can improve our results by supplying some user-defined parameters, especially when using them out of the box like we are here. The variables `hop_length` and `tempo` are two such parameters in this case. However, the more parameters we define manually, the less flexible our overall system becomes, so it's a tradeoff between accuracy and robustness.) </div> 3.2 From tempo to beats From the tempo we can calculate the times of every beat in the song (assuming the tempo is consistent, which in this case it is):
beat_times = fibonaccistretch.calculate_beat_times(y, sr, tempo) print("First 10 beat times (in seconds): {}".format(beat_times[:10]))
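With a constant tempo, beat times are simply multiples of the beat interval, 60 / tempo seconds. The sketch below shows that arithmetic; the library's version may additionally offset to the first detected onset:

```python
def calculate_beat_times(tempo, duration):
    # Beat times in seconds for a constant-tempo track, first beat at t = 0.
    interval = 60.0 / tempo  # seconds per beat
    times, t = [], 0.0
    while t < duration:
        times.append(t)
        t += interval
    return times

beats = calculate_beat_times(120.0, 2.0)  # 120 BPM -> a beat every half second
```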
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
Using beats_per_measure we can calculate the times for the start of each measure:
# Work in samples from here on beat_samples = librosa.time_to_samples(beat_times, sr=sr) measure_samples = fibonaccistretch.calculate_measure_samples(y, beat_samples, beats_per_measure) print("First 10 measure samples: {}".format(measure_samples[:10]))
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
With these markers in place, we can now overlay the tresillo rhythm onto each measure and listen to the result:
fibonaccistretch.overlay_rhythm_onto_audio(tresillo_rhythm, y, measure_samples, sr=sr)
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
The clicks for measures, pulses, and steps overlap with each other at certain points. While you can hear this based on the fact that each click is at a different frequency, it can be hard to tell visually in the above figure. We can make this more apparent by plotting each set of clicks with a different color. In the below figure, each measure is denoted by a large <span style="color:red">red</span> line, each pulse by a medium <span style="color:green">green</span> line, and each step by a small <span style="color:blue">blue</span> line.
fibonaccistretch.overlay_rhythm_onto_audio(tresillo_rhythm, y, measure_samples, sr=sr, click_colors={"measure": "r", "pulse": "g", "step": "b"})
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
You can hear that the tresillo rhythm's pulses line up with the harmonic rhythm of "Human Nature"; generally, we want to pick rhythms and audio tracks that have at least some kind of musical relationship. (We could actually try to estimate rhythmic patterns based on onsets and tempo, but that's for another time.) Part 4 - Time-stretching audio Now that we've put the symbolic rhythm and source audio together, we're ready to begin manipulating the audio and doing some actual stretching! 4.1 Target rhythms First, we'll define the target rhythm that we want the audio to be mapped to:
original_rhythm = tresillo_rhythm target_rhythm = fibonaccistretch.fibonacci_scale_rhythm(original_rhythm, 1) # "Fibonacci scale" original rhythm by a factor of 1 print("Original rhythm: {}\n" "Target rhythm: {}".format(original_rhythm, target_rhythm))
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
4.2 Pulse ratios Given an original rhythm and target rhythm, we can compute their pulse ratios, that is, the ratio between each of their pulses:
pulse_ratios = fibonaccistretch.calculate_pulse_ratios(original_rhythm, target_rhythm) print("Pulse ratios: {}".format(pulse_ratios))
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
4.3 Modifying measures by time-stretching Since we're treating our symbolic rhythms as having the duration of one measure, it makes sense to start by modifying a single measure. Basically what we want to do is: for each pulse, get the audio chunk that maps to that pulse, and time-stretch it based on our calculated pulse ratios. Below is an implementation of just that. It's a bit long, but that's mostly due to having to define several properties to do with rhythm and audio. The core idea, of individually stretching the pulses, remains the same:
fibonaccistretch.modify_measure??
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
You'll notice that in the part where we choose stretch methods, there's a function called euclidean_stretch that we haven't defined. We'll get to that in just a second! For now, let's just keep that in the back of our heads, and not worry about it too much, so that we can hear what our modification method sounds like when applied to the first measure of "Human Nature":
first_measure_data = y[measure_samples[0]:measure_samples[1]] first_measure_modified = fibonaccistretch.modify_measure(first_measure_data, original_rhythm, target_rhythm, stretch_method="timestretch") ipd.Audio(first_measure_modified, rate=sr)
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
It doesn't sound like there's much difference between the stretched version and the original, does it? 4.4 Modifying an entire track by naively time-stretching each pulse To get a better sense, let's apply the modification to the entire audio track:
# Modify the track using naive time-stretch y_modified, measure_samples_modified = fibonaccistretch.modify_track(y, measure_samples, original_rhythm, target_rhythm, stretch_method="timestretch") plt.figure(figsize=(16,4)) librosa.display.waveplot(y_modified, sr=sr) ipd.Audio(y_modified, rate=sr)
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
Listening to the whole track, the only perceptible difference is that the last two beats of each measure are slightly faster. If we look at the pulse ratios again:
pulse_ratios = fibonaccistretch.calculate_pulse_ratios(original_rhythm, target_rhythm) print(pulse_ratios)
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
... we can see that this makes sense, as we're time-stretching the first two pulses by the same amount, and then time-stretching the last pulse by a different amount. (Note that while we're expanding our original rhythm along the Fibonacci sequence, this actually corresponds to a contraction when time-stretching. This is because we want to maintain the original tempo, so we're trying to fit more steps into the same timespan.) 4.5 Overlaying target rhythm clicks We can get some more insight if we sonify the target rhythm's clicks and overlay it onto our modified track:
fibonaccistretch.overlay_rhythm_onto_audio(target_rhythm, y_modified, measure_samples, sr)
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
Looking at the first pulses of the original rhythm and target rhythm, we want to turn [1 0 0] into [1 0 0 0 0]. To accomplish this, we'll turn to the concept of Euclidean rhythms. 5.2 Generating Euclidean rhythms using Bjorklund's algorithm A Euclidean rhythm is a type of rhythm that can be generated based upon the Euclidean algorithm for calculating the greatest common divisor of two numbers.
fibonaccistretch.euclid?? gcd = fibonaccistretch.euclid(8, 12) print("Greatest common divisor of 8 and 12 is {}".format(gcd))
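The classic algorithm behind that call fits in a few lines: repeatedly replace the pair (a, b) with (b, a mod b) until the remainder hits zero. A minimal sketch:

```python
def euclid(a, b):
    # Greatest common divisor via the Euclidean algorithm.
    while b:
        a, b = b, a % b
    return a
```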
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
You might have noticed that this rhythm is exactly the same as the rhythm produced by contracting the tresillo rhythm along the Fibonacci sequence by a factor of 1:
print(fibonaccistretch.fibonacci_scale_rhythm(tresillo_rhythm, -1))
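To see why the two coincide, here's a sketch of how Fibonacci scaling of a rhythm can work: each pulse length is shifted along the Fibonacci sequence by the stretch factor, and a rhythm is rebuilt from the new lengths. (This illustrates the idea under the assumption that every pulse length is itself a Fibonacci number; it is not necessarily the module's exact implementation.)

```python
FIBS = [1, 2, 3, 5, 8, 13, 21, 34]

def fib_shift(length, factor):
    """Move a pulse length along the Fibonacci sequence by `factor` positions."""
    return FIBS[FIBS.index(length) + factor]

def fib_scale_rhythm(rhythm, factor):
    # pulse lengths -> shifted lengths -> rebuilt rhythm
    onsets = [i for i, x in enumerate(rhythm) if x == 1]
    lengths = [b - a for a, b in zip(onsets, onsets[1:] + [len(rhythm)])]
    out = []
    for length in lengths:
        out += [1] + [0] * (fib_shift(length, factor) - 1)
    return out

tresillo = [1, 0, 0, 1, 0, 0, 1, 0]    # pulse lengths 3, 3, 2
print(fib_scale_rhythm(tresillo, -1))  # contraction: [1, 0, 1, 0, 1]
print(fib_scale_rhythm(tresillo, 1))   # expansion: pulse lengths 5, 5, 3
```

Contracting by a factor of 1 maps lengths (3, 3, 2) to (2, 2, 1), giving [1 0 1 0 1] -- the same pattern as the Euclidean rhythm above.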
The resulting pulse ratios are:
print(fibonaccistretch.calculate_pulse_ratios(original_pulse_rhythm, target_pulse_rhythm))
... which doesn't intuitively look like it would produce anything different from what we tried before. However, we might perceive a greater difference because:

a) we're working on a more granular temporal level (subdivisions of pulses as opposed to measures), and
b) we're adjusting an equally-spaced rhythm (e.g. [1 1 1]) to one that's not necessarily equally-spaced (e.g. [1 0 1 0 1])

5.4 The Euclidean stretch algorithm

With all this in mind, we can now implement Euclidean stretch:
fibonaccistretch.euclidean_stretch??
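Point (b) above can be made concrete with a quick worked example. Interpreting a pulse's length as the number of steps to the next onset, and a ratio as target length over original length (an assumed definition, not necessarily the module's), turning the equally-spaced [1 1 1] into [1 0 1 0 1] stretches the first two pulses twice as much as the last:

```python
def pulse_lengths(rhythm):
    # Steps between consecutive onsets; the last pulse runs to the end
    onsets = [i for i, x in enumerate(rhythm) if x == 1]
    return [b - a for a, b in zip(onsets, onsets[1:] + [len(rhythm)])]

orig, targ = [1, 1, 1], [1, 0, 1, 0, 1]
ratios = [t / o for o, t in zip(pulse_lengths(orig), pulse_lengths(targ))]
print(ratios)  # [2.0, 2.0, 1.0] -- no longer equal
```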
Let's take a listen to how it sounds:
# Modify the track
y_modified, measure_samples_modified = fibonaccistretch.modify_track(y, measure_samples,
                                                                     original_rhythm, target_rhythm,
                                                                     stretch_method="euclidean")
plt.figure(figsize=(16,4))
librosa.display.waveplot(y_modified, sr=sr)
ipd.Audio(y_modified, rate=sr)
Much better! With clicks:
fibonaccistretch.overlay_rhythm_onto_audio(target_rhythm, y_modified, measure_samples, sr)
As you can hear, the modified track's rhythm is in line with the clicks, and sounds noticeably different from the original song. This is a pretty good place to end up!

Part 6 - Fibonacci stretch: implementation and examples

6.1 Implementation

Here's an end-to-end implementation of Fibonacci stretch. A lot of the default parameters have been set to the ones we've been using in this notebook, although of course you can pass in your own:
fibonaccistretch.fibonacci_stretch_track??
Now we can simply feed the function a path to an audio file (as well as any parameters we want to customize). This is the exact method that's applied to the sneak peek at the final result up top. The only difference is that we use a 90-second excerpt rather than our original 30-second one:
# "Human Nature" stretched by a factor of 1 using default parameters fibonaccistretch.fibonacci_stretch_track("../data/humannature_90s.mp3", stretch_factor=1, tempo=93.0)
And indeed we get the exact same result.

6.2 Examples: customizing stretch factors

Now that we have a function to easily stretch tracks, we can begin playing around with some of the parameters. Here's the 30-second "Human Nature" excerpt again, only this time it's stretched by a factor of 2 instead of 1:
# "Human Nature" stretched by a factor of 2 fibonaccistretch.fibonacci_stretch_track("../data/humannature_30s.mp3", tempo=93.0, stretch_factor=2, overlay_clicks=True)
As mentioned in part 2.2, we can contract rhythms as well using negative numbers as our stretch_factor. Let's try that with "Chan Chan" by the Buena Vista Social Club:
# "Chan Chan" stretched by a factor of -1 fibonaccistretch.fibonacci_stretch_track("../data/chanchan_30s.mp3", stretch_factor=-1, tempo=78.5)
(Note that although we do end up with a perceptible difference (the song now sounds like it's in 7/8), it should actually sound like it's in 5/8, since [1 0 0 1 0 0 1 0] is getting compressed to [1 0 1 0 1]. This is an implementation detail of the Euclidean stretch method that I need to fix.)

6.3 Examples: customizing original and target rhythms

In order to get musically meaningful results, we generally want to supply parameters that make musical sense with our input audio (although it can certainly be interesting to try with parameters that don't!). One of the parameters that makes the most difference in results is the rhythm sequence used to represent each measure. Here's Chance the Rapper's verse from DJ Khaled's "I'm the One", with a custom original_rhythm that matches the bassline of the song:
# "I'm the One" stretched by a factor of 1 fibonaccistretch.fibonacci_stretch_track("../data/imtheone_cropped_chance_60s.mp3", tempo=162, original_rhythm=np.array([1,0,0,0,0,1,0,0]), stretch_factor=1)
We can define a custom target rhythm as well. In addition, neither original_rhythm nor target_rhythm has to be a Fibonacci rhythm for the stretch algorithm to work (although in this implementation they do both have to have the same number of pulses). Let's try that out with the same verse, going from an original rhythm with 8 steps (i.e. in 4/4 meter) to a target rhythm with 10 steps (i.e. in 5/4 meter):
# "I'm the One" in 5/4 fibonaccistretch.fibonacci_stretch_track("../data/imtheone_cropped_chance_60s.mp3", tempo=162, original_rhythm=np.array([1,0,0,0,0,1,0,0]), target_rhythm=np.array([1,0,0,0,0,1,0,0,0,0]), overlay_clicks=True)
As another example, we can give a swing feel to the first movement of Mozart's "Eine kleine Nachtmusik" (K. 525), as performed by A Far Cry:
# "Eine kleine Nachtmusik" with a swing feel fibonaccistretch.fibonacci_stretch_track("../data/einekleinenachtmusik_30s.mp3", tempo=130, original_rhythm=np.array([1,0,1,1]), target_rhythm=np.array([1,0,0,1,0,1]))
It works pretty decently until around 0:09, at which point the assumption of a metronomically consistent tempo breaks down. (This is one of the biggest weaknesses with the current implementation, and is something I definitely hope to work on in the future.) Let's also hear what "Chan Chan" sounds like in 5/4:
# "Chan Chan" in 5/4 fibonaccistretch.fibonacci_stretch_track("../data/chanchan_30s.mp3", tempo=78.5, original_rhythm=np.array([1,0,0,1,0,0,0,0]), target_rhythm=np.array([1,0,0,0,0,1,0,0,0,0])) # Also interesting to try with [1,0,1]
6.4 Examples: customizing input beats per measure

We can also work with source audio in other meters. For example, Frank Ocean's "Pink + White" is in 6/8. Here I've stretched it into 4/4 using the rhythm of the bassline, but you can uncomment the other supplied parameters (or supply your own!) to hear how they sound as well:
# "Pink + White" stretched by a factor of 1 fibonaccistretch.fibonacci_stretch_track("../data/pinkandwhite_30s.mp3", beats_per_measure=6, tempo=160, # 6/8 to 4/4 using bassline rhythm original_rhythm=np.array([1,1,1,1,0,0]), target_rhythm=np.array([1,1,1,0,1,0,0,0]), # 6/8 to 4/4 using half notes # original_rhythm=np.array([1,0,0,1,0,0]), # target_rhythm=np.array([1,0,0,0,1,0,0,0]), # 6/8 to 10/8 (5/4) using Fibonacci stretch factor of 1 # original_rhythm=np.array([1,0,0,1,0,0]), # stretch_factor=1, overlay_clicks=True)
Exercises - Loops and Conditionals
# Exercise 1 - Build a structure that asks the user for the day of the week. If the day is Sunday or Saturday,
# print "Hoje é dia de descanso" ("Today is a rest day") on screen; otherwise print "Você precisa trabalhar!"
# ("You need to work!")

# Exercise 2 - Create a list of 5 fruits and check whether the fruit 'Morango' (strawberry) is in the list

# Exercise 3 - Create a tuple of 4 elements, multiply each element of the tuple by 2, and store the results in a list

# Exercise 4 - Create a sequence of even numbers between 100 and 150 and print it on screen

# Exercise 5 - Create a variable called temperatura and assign it the value 40. While temperatura is greater than 35,
# print the temperatures on screen

# Exercise 6 - Create a variable called contador = 0. While contador is less than 100, print the values on screen,
# but when the value 23 is reached, stop the execution of the program

# Exercise 7 - Create an empty list and a variable with the value 4. While the value of the variable is less than or
# equal to 20, append only the even values to the list, then print the list

# Exercise 8 - Turn the result of this range function into a list: range(5, 45, 2)
nums = range(5, 45, 2)

# Exercise 9 - Fix the errors in the code below and run the program (the code is intentionally broken).
# Hint: there are 3 errors.
temperatura = float(input('Qual a temperatura? '))
if temperatura > 30
print('Vista roupas leves.')
else
print('Busque seus casacos.')

# Exercise 10 - Write a program that counts how many times the letter "r" appears in the sentence below. Use a
# placeholder in your print statement
# "É melhor, muito melhor, contentar-se com a realidade; se ela não é tão brilhante como os sonhos, tem pelo menos a
# vantagem de existir." (Machado de Assis)
frase = "É melhor, muito melhor, contentar-se com a realidade; se ela não é tão brilhante como os sonhos, tem pelo menos a vantagem de existir."
Cap03/Notebooks/DSA-Python-Cap03-Exercicios-Loops-Condiconais.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
Regression With a Single Feature

Using a single feature to make a numerical prediction

TO DO - nothing for the moment
# Shared functions used in multiple notebooks
%run Shared-Functions.ipynb
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
ACKNOWLEDGEMENT

The dataset used in this notebook is from Andrew Ng's course on Machine Learning on Coursera.

Linear regression has been in use for hundreds of years. What place does it have in the shiny (relatively) new field of machine learning? It's the same end result you've learned in statistics classes, but arrived at in a much simpler way: the method for getting there follows the steps of machine learning outlined in the Nuts and Bolts notebook. We'll go through these steps again in this notebook.

The Business Problem: Predicting Restaurant Profits

You're the CEO of a restaurant franchise. Your restaurants operate in a number of small towns. You're thinking of how to grow the business. Where's the best place to put the next restaurant?

For each restaurant your company owns and operates, you have the population (in 10,000s) of the town the restaurant is located in and the most recent year's profit (in \$10,000s) generated by the restaurant. You'd like to use this data to make some profit predictions and use these to prioritize locations for new restaurants.

Let's have a look at this data.

Load the Data
# Load up the packages to investigate the data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
import seaborn as sns

import os  # OS-independent way to navigate the file system

# Data directory is one directory up in relation to directory of this notebook
data_dir_root = os.path.normpath(os.getcwd() + os.sep + os.pardir)

# Where the file is
file_url = data_dir_root + os.sep + "Data" + os.sep + "food-truck-profits.txt"

# Load the data into a dataframe
data = pd.read_csv(file_url, header=None, names=['Population', 'Profit'])

# Quick check on what we have
data.shape
This means that the dataset has 97 rows and 2 columns. Let's see what the data looks like. The first few rows of our data look like this:
data.head()
Step 1: Visualize the Data
# Visualize the data
data.plot.scatter(x='Population', y='Profit', figsize=(8,6));
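Later steps of this notebook will fit a line to this data. As a preview, a single-feature least-squares fit can be computed in closed form via the normal equation. The sketch below uses synthetic stand-in data (the food-truck file isn't bundled here), so the slope and intercept are illustrative only:

```python
import numpy as np

# Synthetic stand-in for (Population, Profit): points scattered around a known line
rng = np.random.default_rng(0)
population = rng.uniform(5, 25, size=97)                     # in 10,000s
profit = 1.2 * population - 4.0 + rng.normal(0, 1, size=97)  # in $10,000s

# Normal equation: theta = (X^T X)^{-1} X^T y, with a column of ones for the intercept
X = np.column_stack([np.ones_like(population), population])
theta = np.linalg.solve(X.T @ X, X.T @ profit)
intercept, slope = theta
print(intercept, slope)  # close to the true values -4.0 and 1.2

# Predicted profit for a town of 100,000 people (population = 10)
print(intercept + slope * 10)
```

Fitting by gradient descent, as done later in a machine-learning treatment, converges to the same line; the closed form is just a handy sanity check for the single-feature case.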