<h3>CDF and PS Plots illustrating Albedo</h3>
# Run function and plot
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(121)
ind = 0  # Index for Albedo
X1 = cdf_info_array[0,:,ind]
F1 = cdf_info_array[1,:,ind]
plt.title('CDF %s' % file_list[ind][0:-4])
ax.set_ylabel(file_list[ind][0:-4])
ax.set_xlabel('fraction of pixels')
ax.set_xlim([0,1])
ax.plot(F1,X1)
ax =...
2017 NEON DI JAMB Presentation.ipynb
yetiracing4als/2017-NEON-DI-JAMB
bsd-2-clause
<h3>CDF and PS Plots illustrating Canopy Height</h3>
# Run function and plot
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(121)
ind = 3  # Index for Canopy Height
X1 = cdf_info_array[0,:,ind]
F1 = cdf_info_array[1,:,ind]
plt.title('CDF %s' % file_list[ind][0:-4])
ax.set_ylabel(file_list[ind][0:-4])
ax.set_xlabel('fraction of pixels')
ax.set_xlim([0,1])
ax.plot(F1,X1...
Reading the video and converting it to a NumPy array. We now read the video frame by frame and convert it into a 4-dimensional NumPy array (n_frames, rows, columns, channels).
# convert the video to a numpy array
print("Converting video to NumPy array...")
# dimensions
cant_frames = vid_reader.get_length()
dimensiones = (cant_frames, mdata['source_size'][1], mdata['source_size'][0], 3)
# create a 4-D numpy array (n_frames, rows, columns, 3)
video_np = np.zeros(dimensio...
courses/images/res/material_resuelto/05_video/Tutorial de Localización en videos.ipynb
facundoq/facundoq.github.io
gpl-2.0
Visualizing the video. We display a few frames of the video to verify that it loaded correctly.
# Define a function to display a video in a figure
def ver_video(video, cant_frames):
    # make Jupyter plot in a new window
    %matplotlib qt5
    plt.figure(figsize=(12,9))  # careful: dimensions are in INCHES
    for i in range(cant_frames):
        plt.imshow(video[i])
        plt.show(...
Processing the video in matrix form. Now that the video is loaded as a NumPy array, we only need to process it frame by frame. So we will walk through each frame, filtering the images as we did in the previous tutorial. For better code modularity and readability, we u...
import skimage.morphology  # for erosion
import matplotlib.colors   # for converting to HSV

# this function takes an RGB image,
# segments the object within the color range limites_HSV, and
# returns a binary image with the result
def filtrar_imagen(img_rgb, limites_HSV):
    h_min, h_max = limites_HSV[0]
    s_...
Computing the center of mass. We also do this in vectorized form.
# compute the center of mass of the image passed as argument
# returns a two-element numpy vector
def calcular_centro_de_masa(mask_img):
    r, c = np.where(mask_img > 0)  # positions of the pixels with value 1
    # Element `i` of r holds the row of pixel i with value 1; likewise for c with the col...
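The center-of-mass helper above is truncated; a minimal self-contained sketch of the same vectorized idea (an assumed completion based on the `np.where` hint, not the notebook's exact code):

```python
import numpy as np

def center_of_mass(mask):
    """Return the (row, col) center of mass of a binary mask as a 2-element array."""
    r, c = np.where(mask > 0)  # coordinates of every nonzero pixel
    return np.array([r.mean(), c.mean()])

# A 3x3 mask with a single 2x2 blob in the upper-left corner
mask = np.zeros((3, 3))
mask[0:2, 0:2] = 1
print(center_of_mass(mask))  # → [0.5 0.5]
```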
Embedding one image inside another. Again, we provide a vectorized version of the same algorithm.
def dibujar_objeto_en_imagen(img, objeto, posicion):
    h_obj, w_obj, c = objeto.shape
    h_img, w_img, c = img.shape
    dim_obj = np.array([h_obj, w_obj])  # dimensions
    # comienzo holds the coordinates of the top-left corner
    comienzo = np.array(posicion) - dim_obj//2
    # fin holds ...
A function to process one frame. As before, we combine the three previous functions to process a single frame.
# Takes:
#  frame: the image to process (by reference)
#  limites_HSV: the color ranges used to find the glove
#  img_objeto: the image of the object to overlay at the glove's position
def procesar_frame(frame, limites_HSV, img_objeto):
    # IMPLEMENT - START
    # 1) Compute the segmenta...
Processing the video. Now that all the necessary functions are defined, process the video frame by frame.
import skimage.io  # for opening images

# process the video frame by frame, segmenting the pink glove
def procesar_video(video_np, limites_HSV, img_objeto):
    # copy of the original video
    video_procesado = np.copy(video_np)
    # process every frame
    cant_frames = video_np.shape[0]
    for ...
Saving the video. Finally, we can save the generated video as an mp4 file.
def save_video(video_np, file_name):
    # open a video writer
    vid_writer = imageio.get_writer(file_name)
    # iterate over all frames
    for i in range(video_np.shape[0]):
        vid_writer.append_data(video_np[i]*255)  # convert back to the 0-255 range
    # close the writer
    vid_writer.close()

# guar...
Using the same width and height triggers the scroll bar.
import folium

width, height = 480, 350
fig = Figure(width=width, height=height)
m = folium.Map(
    location=location, tiles=tiles,
    width=width, height=height,
    zoom_start=zoom_start
)
fig.add_child(m)
fig
examples/WidthHeight.ipynb
ocefpaf/folium
mit
Can figure take relative sizes?
width, height = "100%", 350
fig = Figure(width=width, height=height)
m = folium.Map(
    location=location, tiles=tiles,
    width=width, height=height,
    zoom_start=zoom_start
)
fig.add_child(m)
fig
I guess not. (Well, it does make sense for a single HTML page, but not for iframes.)
width, height = 480, "100%"
fig = Figure(width=width, height=height)
m = folium.Map(
    location=location, tiles=tiles,
    width=width, height=height,
    zoom_start=zoom_start
)
fig.add_child(m)
fig
Note that Figure is interpreting this as 50px. We should raise an error and be explicit in the docs.
width, height = "50%", "100%"
fig = Figure(width=width, height=height)
m = folium.Map(
    location=location, tiles=tiles,
    width=width, height=height,
    zoom_start=zoom_start
)
fig.add_child(m)
fig

width, height = "150%", "100%"
try:
    folium.Map(
        location=location, tiles=tiles,
        width=widt...
Maybe we should recommend
width, height = 480, 350
fig = Figure(width=width, height=height)
m = folium.Map(
    location=location, tiles=tiles,
    width="100%", height="100%",
    zoom_start=zoom_start
)
fig.add_child(m)
fig
Now to a real world example! Scotch! <img src="ht...
# Read the data file and drop the columns we don't care about:
whisky_dataframe = pd.read_csv(
    filepath_or_buffer="whiskies.csv",
    header=0, sep=',', index_col=1)
whisky_dataframe.drop(['RowID', 'Postcode', ' Latitude', ' Longitude'],
                     inplace=True, axis=1)
whisky_dataframe.head(10)
Tutorial Notebook Development.ipynb
Joao-M-Almeida/ML-Tutorial
mit
Feature selection and extraction

PCA
random_data_1 = np.random.multivariate_normal(
    mean=[0, 0], cov=[[5, 5], [0, 0.5]], size=100)
random_data_2 = np.random.multivariate_normal(
    mean=[6, 6], cov=[[5, 5], [0, 0.5]], size=100)
random_data = np.concatenate([random_data_1, random_data_2], axis=0)
random_labels = np.concatenate([np.ones((100,1)), np.z...
Model complexity and overfitting
# Adapted from: http://scikit-learn.org/stable/auto_examples/linear_model/plot_polynomial_interpolation.html
# Author: Mathieu Blondel
#         Jake Vanderplas
# License: BSD 3 clause

def f(x, noise=False):
    """Function to approximate by polynomial interpolation."""
    if noise:
        return np.sin(x) + np.ra...
Resources:
- Online courses: Machine Learning by Andrew Ng
- Books: The Master Algorithm; Pattern Recognition and Machine Learning
- Articles: A Few Useful Things to Know about Machine Learning
- Tutorials: Kaggle: Predicting Survival r...
random_data = np.random.randn(100, 2)
random_labels = np.random.randint(0, 2, 100)
fig = plt.figure(figsize=(8,8))
plt.scatter(random_data[:, 0], random_data[:, 1], c=random_labels, cmap=cmap_bold)
plt.xlabel('Random Dimension 1', fontsize=14)
plt.ylabel('Random Dimension 2', fontsize=14)
plt.show()
K-Nearest Neighbors Classifier
clf = KNeighborsClassifier(n_neighbors=10)  # try n_neighbors=1 for a perfect (overfit) training fit
clf.fit(random_data, random_labels)
print("Accuracy: {:0.3f}%".format(
    clf.score(random_data, random_labels)*100))
(xx, yy, Z) = Utils.predict_mesh(random_data, clf, h=0.01)
fig = plt.figure(figsize=(8,8))
plt.xlabel('Rand...
There is so much more. This can't even be considered scratching the surface. Go ahead and experiment: it's a very interesting field, and there are tons of information and places to learn from!
whisky_data = pd.read_csv(
    filepath_or_buffer="Meta-Critic Whisky Database – Selfbuilts Whisky Analysis.csv")
whisky_data.describe()
whisky_data.head()
We want to plot the distribution of the mutations along the chromosomes, so we first read the positions of the mutations (from a random sample of 100,000 mutations).
from collections import defaultdict
from ICGC_data_parser import SSM_Reader

distribution = defaultdict(list)
for record in SSM_Reader(filename='data/ssm_sample.vcf'):
    # Associate CHROMOSOME -> [MUTATION POSITIONS]
    distribution[record.CHROM].append(record.POS)
mutations_distribution_chroms.ipynb
Ad115/ICGC-data-parser
mit
We want to add information about the positions of the centromeric regions and the chromosome boundaries. We read this from the table data/chromosome-data.tsv.
from collections import namedtuple

# Create a custom class whose objects
# hold information about a chromosome
Chromosome = namedtuple('Chromosome',
                        ['length', 'centromere_start', 'centromere_end'])

import pandas as pd

# Open the file with...
To ensure the chromosomes are plotted in the correct order, we provide a list that defines that order
chrom_names = [str(i+1) for i in range(22)] + ['X', 'Y', 'MT']
Finally, we can plot the mutations
for chrom in chrom_names:
    fig, ax = plt.subplots(figsize=(8, 2))
    # Main plot
    ax.hist(distribution[chrom], bins=300)
    ax.set(title=f'Chromosome {chrom}')
    if chrom in chromosomes:
        # Fetch data on chromosome
        # length and centromere positions
        chrom_data = chromosomes[ch...
Esperanto
import nltk print( nltk.corpus.udhr.raw('Esperanto-UTF8'))
Jupyter/languages_and_corpus.ipynb
Linguistics-DTU/DTU_8th_Sem_Project
gpl-3.0
Here, give a brief overview of Esperanto. Similarly, we chose the UDHR in all 22 languages shown in the following world map.
import os
os.getcwd()

from IPython.display import Image
from IPython.core.display import HTML

PATH = "C:\\Users\\user\\Desktop\\Language Space and Mind"
Image(filename=PATH + "\\canvas.png", width=1000, height=500)

list_of_languages = [
    ['English', ['English-Latin1']],
    ['Esperanto', ['Esperanto-UTF8']],
    ['Ger...
Print the UDHR from each language
import nltk
for i in range(len(list_of_languages)):
    print('\x1b[1;34m' + list_of_languages[i][0] + '\x1b[0m')
    print("\n\n")
    print(nltk.corpus.udhr.raw(list_of_languages[i][1][0]))  # raw() takes a single fileid string
    print("\n\n\n\n\n\n\n")
Part-of-speech tagging in these languages
import nltk
for i in range(len(list_of_languages)):
    print(list_of_languages[i][0])
    words = nltk.pos_tag(nltk.corpus.udhr.words(list_of_languages[i][1][0]))
    words_types = []
    for j in range(len(words)):  # use a distinct loop variable to avoid shadowing i
        words_types.append(words[j][1])
    print(set(words_types))
    print("\n\n\n\n\n\n\n"...
Now we plot the graph as per the number of distinct elements in the POS set
import nltk

# now count the POS tags in each language
num_pos = []
for i in range(len(list_of_languages)):
    print(list_of_languages[i][0])
    words = nltk.pos_tag(nltk.corpus.udhr.words(list_of_languages[i][1][0]))
    words_types = []
    for j in range(len(words)):  # distinct loop variable to avoid shadowing i
        words_types.append(words[j][1])
    ...
Plot styling
import numpy as np
import matplotlib.pyplot as plt

plt.style.use('bmh')
%matplotlib inline

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 14
fig_size[1] = 9
plt.rcParams["figure.figsize"] = fig_size

lang_names = []
for i in range(len(list_of_languages)):
    lang_names.append(list_of_languages[i][0])
Here we plot the number of distinct POS tags per language.
import matplotlib.pyplot as plt
#plt.rcdefaults()
import numpy as np

plt.style.use('bmh')

x_pos = np.arange(len(list_of_languages))
plt.bar(x_pos, num_pos, color='r', align='center', alpha=0.4)
plt.xticks(x_pos, lang_names, rotation=45)
#plt.xlabel('Performance')
#pl...
Clearly, the POS taggers in the basic library are not all of the same quality.
import matplotlib.pyplot as plt
import numpy as np

plt.style.use('bmh')

x_pos = np.arange(len(list_of_languages))
barlist = plt.bar(x_pos, num_pos, color='r', align='center', alpha=0.4)
for i in range(len(barlist)):
    if barlist[i].get_height() >= 15:
        ...
To move towards a better application of Category Theory to the Esperanto language, we used a pre-tagged corpus. Here is a sample of the pre-tagged data. We relied on the regular structure of the language to form the requisite regular expressions.
text = ''' Longe/RB vivadis/VBD en/IN paco/NNS tiu/DT gento/NNS trankvila,/JJ de/IN Kristanismo/NNS netusita/JJ gis/IN dek-tria/dek-tria jarcento./NNS De/IN la/DT cetera/JJ mondo/NNS forkasita/JJ per/IN marcoj/NNP kaj/CC densaj/JJ arbaregoj,/NNP kie/RB kuras/VBP gis/IN nun/VB sovagaj/JJ urbovoj,/NNP la/DT popolo/NNS da...
Standardizing a SMILES string The standardize_smiles function provides a quick and easy way to get the standardized version of a given SMILES string:
from molvs import standardize_smiles standardize_smiles('C[n+]1c([N-](C))cccc1')
examples/standardization.ipynb
mcs07/MolVS
mit
While this is convenient for one-off cases, it's inefficient when dealing with multiple molecules and doesn't allow any customization of the standardization process. The Standardizer class The Standardizer class provides flexibility to specify custom standardization stages and efficiently standardize multiple molecules...
from rdkit import Chem
import molvs
from molvs import Standardizer

mol = Chem.MolFromSmiles('[Na]OC(=O)c1ccc(C[S+2]([O-])([O-]))cc1')
mol
s = Standardizer()
smol = s.standardize(mol)
smol
Chem.MolToSmiles(smol)
The Standardizer class takes a number of initialization parameters to customize its behaviour:
from molvs.normalize import Normalization

norms = (
    Normalization('Nitro to N+(O-)=O',
                  '[*:1][N,P,As,Sb:2](=[O,S,Se,Te:3])=[O,S,Se,Te:4]>>[*:1][*+1:2]([*-1:3])=[*:4]'),
    Normalization('Pyridine oxide to n+O-',
                  '[n:1]=[O:2]>>[n+:1][O-:2]'),
)
my_s = Standardizer(normalizations=norms)
smol = my_s.standardize(mol)...
Notice that the sulfone group wasn't normalized in this case, because when initializing the Standardizer we only specified two Normalizations. The default list of normalizations is molvs.normalize.NORMALIZATIONS. It is possible to reuse a Standardizer instance on many molecules once it has been initialized with some p...
my_s.standardize(Chem.MolFromSmiles('C1=C(C=C(C(=C1)O)C(=O)[O-])[S](O)(=O)=O.[Na+]'))
my_s.standardize(Chem.MolFromSmiles('[Ag]OC(=O)O[Ag]'))
In this table, there is one row for every transaction and a transaction_time column that specifies when the transaction took place. This means that transaction_time is the time index because it indicates when the information in each row became known and available for feature calculations. For now, ignore the _ft_last_t...
es['customers']
docs/source/getting_started/handling_time.ipynb
Featuretools/featuretools
bsd-3-clause
Here, we have two time columns, join_date and birthday. While either column might be useful for making features, the join_date should be used as the time index because it indicates when that customer first became available in the dataset. What is the Cutoff Time? The cutoff_time specifies the last point in time that a ...
fm, features = ft.dfs(entityset=es,
                      target_dataframe_name='customers',
                      cutoff_time=pd.Timestamp("2014-1-1 04:00"),
                      instance_ids=[1, 2, 3],
                      cutoff_time_in_index=True)
fm
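The data selection a cutoff time implies can be sketched with plain pandas (the timestamps and column names here are hypothetical; the real filtering happens inside Featuretools):

```python
import pandas as pd

# Toy transactions table with a time index column (made-up values)
transactions = pd.DataFrame({
    "transaction_id": [1, 2, 3, 4],
    "transaction_time": pd.to_datetime([
        "2014-01-01 01:00", "2014-01-01 03:30",
        "2014-01-01 04:00", "2014-01-01 05:15"]),
    "amount": [10.0, 20.0, 30.0, 40.0],
})

cutoff = pd.Timestamp("2014-01-01 04:00")
# Rows valid at the cutoff: time index up to and including the cutoff
valid = transactions[transactions["transaction_time"] <= cutoff]
print(valid["transaction_id"].tolist())  # → [1, 2, 3]
```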
Even though the entityset contains the complete transaction history for each customer, only data with a time index up to and including the cutoff time was used to calculate the features above. Using a Cutoff Time DataFrame Oftentimes, the training examples for machine learning will come from different points in time. T...
cutoff_times = pd.DataFrame()
cutoff_times['customer_id'] = [1, 2, 3, 1]
cutoff_times['time'] = pd.to_datetime(['2014-1-1 04:00',
                                      '2014-1-1 05:00',
                                      '2014-1-1 06:00',
                                      '2014-1-1 08:00'])
cutoff_times['label'] = [True, True, False, True...
We can now see that every row of the feature matrix is calculated at the corresponding time in the cutoff time dataframe. Because we calculate each row at a different time, it is possible to have a repeat customer. In this case, we calculated the feature vector for customer 1 at both 04:00 and 08:00. Training Window By...
window_fm, window_features = ft.dfs(entityset=es,
                                    target_dataframe_name="customers",
                                    cutoff_time=cutoff_times,
                                    cutoff_time_in_index=True,
                                    training_window="2 hour")
window_fm
We can see that the counts for the same feature are lower after we shorten the training window:
fm[["COUNT(transactions)"]]
window_fm[["COUNT(transactions)"]]
Setting a Last Time Index The training window in Featuretools limits the amount of past data that can be used while calculating a particular feature vector. A row in the dataframe is filtered out if the value of its time index is either before or after the training window. This works for dataframes where a row occurs a...
last_time_index_col = es['sessions'].ww.metadata.get('last_time_index')
es['sessions'][['session_start', last_time_index_col]].head()
Featuretools can automatically add last time indexes to every DataFrame in an Entityset by running EntitySet.add_last_time_indexes(). When using a training window, if a last_time_index has been set, Featuretools will check to see if the last_time_index is after the start of the training window. That, combined with the ...
df = es['transactions'] df[df["session_id"] == 1].head()
Looking at the data, transactions occur every 65 seconds. To check how include_cutoff_time affects training windows, we can calculate features at the time of a transaction while using a 65-second training window. This creates a training window with a transaction at both endpoints of the window. For this example, we'...
from featuretools.primitives import Sum

sum_log = ft.Feature(
    es['transactions'].ww['amount'],
    parent_dataframe_name='sessions',
    primitive=Sum,
)
cutoff_time = pd.DataFrame({
    'session_id': [1],
    'time': ['2014-01-01 00:04:20'],
}).astype({'time': 'datetime64[ns]'})
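The two endpoint behaviours described in the next cells can be mimicked with plain timestamp comparisons. This is a sketch of the assumed interval semantics, not Featuretools internals: include_cutoff_time=True keeps (cutoff − window, cutoff], while False keeps [cutoff − window, cutoff):

```python
import pandas as pd

# The two transactions sitting exactly on the window endpoints
times = pd.to_datetime(["2014-01-01 00:03:15", "2014-01-01 00:04:20"])
cutoff = pd.Timestamp("2014-01-01 00:04:20")
window_start = cutoff - pd.Timedelta("65 seconds")  # 2014-01-01 00:03:15

# include_cutoff_time=True  → half-open on the left:  (start, cutoff]
in_window_true = [(window_start < t <= cutoff) for t in times]
# include_cutoff_time=False → half-open on the right: [start, cutoff)
in_window_false = [(window_start <= t < cutoff) for t in times]

print(in_window_true)   # → [False, True]
print(in_window_false)  # → [True, False]
```

This matches the sums reported below: only the transaction at the cutoff survives in the first case, and only the one at the window start in the second.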
With include_cutoff_time=True, the oldest point in the training window (2014-01-01 00:03:15) is excluded and the cutoff time point is included. This means only transaction 371 is in the training window, so the sum of all transaction amounts is 31.54
# Case 1: include_cutoff_time = True
actual = ft.calculate_feature_matrix(
    features=[sum_log],
    entityset=es,
    cutoff_time=cutoff_time,
    cutoff_time_in_index=True,
    training_window='65 seconds',
    include_cutoff_time=True,
)
actual
Whereas with include_cutoff_time=False, the oldest point in the window is included and the cutoff time point is excluded. So in this case transaction 116 is included and transaction 371 is excluded, and the sum is 78.92
# Case 2: include_cutoff_time = False
actual = ft.calculate_feature_matrix(
    features=[sum_log],
    entityset=es,
    cutoff_time=cutoff_time,
    cutoff_time_in_index=True,
    training_window='65 seconds',
    include_cutoff_time=False,
)
actual
Approximating Features by Rounding Cutoff Times For each unique cutoff time, Featuretools must perform operations to select the data that’s valid for computations. If there are a large number of unique cutoff times relative to the number of instances for which we are calculating features, the time spent filtering data ...
import urllib.request as urllib2

opener = urllib2.build_opener()
opener.addheaders = [('Testing', 'True')]
urllib2.install_opener(opener)

es_flight = ft.demo.load_flight(nrows=100)
es_flight
es_flight['trip_logs'].head(3)
For every trip log, the time index is date_scheduled, which is when the airline decided on the scheduled departure and arrival times, as well as what route will be flown. We don't know the rest of the information about the actual departure/arrival times and the details of any delay at this time. However, it is possible...
ct_flight = pd.DataFrame()
ct_flight['trip_log_id'] = [14, 14, 92]
ct_flight['time'] = pd.to_datetime(['2016-12-28', '2017-1-25', '2016-12-28'])
ct_flight['label'] = [True, True, False]
ct_flight
Now, let's calculate the feature matrix:
fm, features = ft.dfs(entityset=es_flight,
                      target_dataframe_name='trip_logs',
                      cutoff_time=ct_flight,
                      cutoff_time_in_index=True,
                      agg_primitives=["max"],
                      trans_primitives=["month"])
fm[['flights.origin', 'flight...
Let's understand the output: A row was made for every id-time pair in ct_flight, which is returned as the index of the feature matrix. The output was sorted by cutoff time. Because of the sorting, it's often helpful to pass in a label with the cutoff time dataframe so that it will remain sorted in the same fashion ...
cutoff_times
Then passing in window_size='1h' and num_windows=2 makes one row an hour over the last two hours to produce the following new dataframe. The result can be directly passed into DFS to make features at the different time points.
temporal_cutoffs = ft.make_temporal_cutoffs(cutoff_times['customer_id'],
                                            cutoff_times['time'],
                                            window_size='1h',
                                            num_windows=2)
temporal_cutoffs
fm, features = ft.dfs(entityset=es,
                      ...
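Under the hood, this amounts to generating evenly spaced timestamps ending at each original cutoff. A rough pandas equivalent for one customer (a sketch, not the library's implementation):

```python
import pandas as pd

cutoff = pd.Timestamp("2014-01-01 08:00")
# One row per hour over the last two hours, ending at the cutoff
windowed = pd.date_range(end=cutoff, periods=2, freq="1h")
print(list(windowed))  # → [2014-01-01 07:00, 2014-01-01 08:00]
```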
<div class='alert alert-warning' style='width:600px; font-size:16px'> <h1>GLOBAL VARIABLE WARNING</h1> Here I download updated clinical data from the TCGA Data Portal. This is a secure site which uses HTTPS. I had to give it a path to my ca-cert for the download to work. Download a copy of a generic cacert.pem [h...
PATH_TO_CACERT = '/cellar/users/agross/cacert.pem'
Notebooks/get_all_MAFs.ipynb
theandygross/CancerData
mit
Download most recent files from MAF dashboard
out_path = OUT_PATH + '/MAFs_new_2/'
if not os.path.isdir(out_path):
    os.makedirs(out_path)

maf_dashboard = 'https://confluence.broadinstitute.org/display/GDAC/MAF+Dashboard'
!curl --cacert $PATH_TO_CACERT $maf_dashboard -o tmp.html
Use BeautifulSoup to parse out all of the links in the table
f = open('tmp.html', 'rb').read()
soup = BeautifulSoup(f)
r = [l.get('href') for l in soup.find_all('a')
     if l.get('href') is not None and '.maf' in l.get('href')]
Download all of the MAFs by following the links. This takes a while, as I'm downloading all of the data. I read in the table first to count the number of comment lines, and a second time to actually load the data. Yes, there is likely a more efficient way to do this, but I'm waiting on https://github.com/pydata/pandas/iss...
t = pd.read_table(f, nrows=10, sep='not_real_term', header=None,
                  squeeze=True, engine='python')
cols = ['Hugo_Symbol', 'NCBI_Build', 'Chromosome', 'Start_position',
        'End_position', 'Strand', 'Reference_Allele', 'Tumor_Seq_Allele1',
        'Tumor_Seq_Allele2', 'Tumor_Sample_...
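The two-pass idea (count the comment lines first, then load with skiprows) can be sketched like this; the file contents and the '#' comment prefix are illustrative assumptions, not the notebook's actual data:

```python
import io
import pandas as pd

# Stand-in for a downloaded MAF file: header comment lines start with '#'
raw = "#version 2.4\n#center broad.mit.edu\nHugo_Symbol\tChromosome\nTP53\t17\n"

def count_comment_lines(text, prefix="#"):
    """First pass: count leading lines that start with the comment prefix."""
    n = 0
    for line in io.StringIO(text):
        if not line.startswith(prefix):
            break
        n += 1
    return n

# Second pass: skip the counted comment lines and load the table
n_comments = count_comment_lines(raw)
maf = pd.read_table(io.StringIO(raw), skiprows=n_comments)
print(n_comments, list(maf.columns))  # → 2 ['Hugo_Symbol', 'Chromosome']
```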
Reduce the MAF down to the most useful columns
m4 = m3[cols]
m4 = m4.reset_index()
#m4.index = map(lambda s: s.split('/')[-1], m4.index)
m4 = m4.drop_duplicates(subset=['Hugo_Symbol', 'Tumor_Sample_Barcode', 'Start_position'])
m4 = m4.reset_index()
m4.to_csv(out_path + 'mega_maf.csv')
Get gene by patient mutation count matrix and save
m5 = m4.ix[m4.Variant_Classification != 'Silent']  # .ix is deprecated; use .loc in modern pandas
cc = m5.groupby(['Hugo_Symbol', 'Tumor_Sample_Barcode']).size()
cc = cc.reset_index()
cc.to_csv(out_path + 'meta.csv')
cc.shape
...and looks something like this in Western music notation: We can convert that into a sequence of bits, with each 1 representing an onset, and 0 representing a rest (similar to the way a sequencer works). Doing so yields this: [1 0 0 1 0 0 1 0] ...which we can conveniently store as a list in Python. Actually, this is...
%matplotlib inline

# Standard library imports
import math

# External libraries
import IPython.display as ipd
import librosa, librosa.display
import numpy as np
import matplotlib.pyplot as plt

import pardir; pardir.pardir()  # Allow imports from parent directory
import bjorklund  # Fork of Brian House's implementation of Bjorklund's algori...
nbs/fibonaccistretch_using_module.ipynb
usdivad/fibonaccistretch
mit
We can listen to the pulses and steps together:
# Generate the clicks
tresillo_pulse_clicks, tresillo_step_clicks = fibonaccistretch.generate_rhythm_clicks(tresillo_rhythm, tresillo_click_interval)
tresillo_pulse_times, tresillo_step_times = fibonaccistretch.generate_rhythm_times(tresillo_rhythm, tresillo_click_interval)

# Tresillo as an array
print(tresillo_rhythm...
You can follow along with the printed array and hear that every 1 corresponds to a pulse, and every 0 to a step. In addition, let's define pulse lengths as the number of steps that each pulse lasts:
tresillo_pulse_lengths = fibonaccistretch.calculate_pulse_lengths(tresillo_rhythm) print("Tresillo pulse lengths: {}".format(tresillo_pulse_lengths))
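A minimal version of what calculate_pulse_lengths presumably does, based on the definition above (my sketch, not the module's exact code):

```python
def pulse_lengths(rhythm):
    """Number of steps each pulse (a 1 in the rhythm) lasts until the next pulse."""
    onsets = [i for i, x in enumerate(rhythm) if x == 1]
    # distance from each onset to the next, wrapping around the end of the pattern
    return [(onsets[(k + 1) % len(onsets)] - i) % len(rhythm) or len(rhythm)
            for k, i in enumerate(onsets)]

tresillo = [1, 0, 0, 1, 0, 0, 1, 0]
print(pulse_lengths(tresillo))  # → [3, 3, 2]
```

The `or len(rhythm)` handles the single-onset case, where the wrap-around distance of 0 should mean the pulse spans the whole pattern.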
Note that the tresillo rhythm's pulse lengths all fall along the Fibonacci sequence. This allows us do some pretty fun things, as we'll see in a bit. But first let's take a step back. Part 2 - Fibonacci rhythms 2.1 Fibonacci numbers The Fibonacci sequence is a particular sequence in which each value is the sum of the t...
fibonaccistretch.fibonacci??
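The fibonaccistretch.fibonacci source isn't reproduced here; an iterative sketch consistent with the usage below (assuming fibonacci(0) == 0):

```python
def fibonacci(n):
    """Return the n-th Fibonacci number, with fibonacci(0) == 0."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fibonacci(n) for n in range(10)])  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```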
And the first 20 numbers in the sequence are:
first_twenty_fibs = np.array([fibonaccistretch.fibonacci(n) for n in range(20)])
plt.figure(figsize=(16,1))
plt.scatter(first_twenty_fibs, np.zeros(20), c="r")
plt.axis("off")
print(first_twenty_fibs)
We can also use the golden ratio to find the index of a Fibonacci number:
fibonaccistretch.find_fibonacci_index??

fib_n = 21
fib_i = fibonaccistretch.find_fibonacci_index(fib_n)
assert fibonaccistretch.fibonacci(fib_i) == fib_n
print("{} is the {}th Fibonacci number".format(fib_n, fib_i))
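One common way to invert the sequence with the golden ratio uses Binet's closed form, fib(i) ≈ φ^i / √5, so i ≈ log_φ(fib(i)·√5). This is a sketch; the module's actual implementation may differ:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio

def find_fibonacci_index(fib_n):
    """Return i such that fibonacci(i) == fib_n, assuming fib_n is a Fibonacci number."""
    # Invert Binet's formula and round to the nearest integer index
    return round(math.log(fib_n * math.sqrt(5)) / math.log(PHI))

print(find_fibonacci_index(21))  # → 8
```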
We might classify it as a Fibonacci rhythm, since every one of its pulse lengths is a Fibonacci number. If we wanted to expand that rhythm along the Fibonacci sequence, what would that look like? An intuitive (and, as it turns out, musically satisfying) method would be to take every pulse length and simply replace it w...
expanded_pulse_lengths = fibonaccistretch.fibonacci_expand_pulse_lengths(tresillo_pulse_lengths) print("Expanded tresillo pulse lengths: {}".format(expanded_pulse_lengths))
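Assuming the expansion simply replaces each pulse length with the next Fibonacci number up, a sketch (fibonacci_expand_pulse_lengths itself is not shown in this excerpt):

```python
def fibonacci_expand_pulse_lengths(pulse_lengths):
    """Replace each pulse length with the next number up in the Fibonacci sequence."""
    fibs = [1, 2, 3, 5, 8, 13, 21, 34]  # enough of the sequence for short rhythms
    return [fibs[fibs.index(length) + 1] for length in pulse_lengths]

# Tresillo pulse lengths [3, 3, 2] expand to [5, 5, 3]
print(fibonacci_expand_pulse_lengths([3, 3, 2]))  # → [5, 5, 3]
```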
We'll also want to be able to contract rhythms along the Fibonacci sequence (i.e. choose numbers in decreasing order instead of increasing order), as well as specify how many Fibonacci numbers away we want to end up. We can generalize this expansion and contraction into a single function that can scale pulse lengths:
# Note that `scale_amount` determines the direction and magnitude of the scaling.
# If `scale_amount` > 0, it corresponds to a rhythmic expansion.
# If `scale_amount` < 0, it corresponds to a rhythmic contraction.
# If `scale_amount` == 0, the original scale is maintained and no changes are made.
print("Tresillo pulse...
Of course, once we have these scaled pulse lengths, we'll want to be able to convert them back into rhythms, in our original array format:
# Scale tresillo rhythm by a variety of factors and plot the results for scale_factor, color in [(0, "r"), (1, "g"), (2, "b"), (-1, "y")]: scaled_rhythm = fibonaccistretch.fibonacci_scale_rhythm(tresillo_rhythm, scale_factor) scaled_pulse_indices = np.array([p_i for p_i,x in enumerate(scaled_rhythm) if x > 0 ])...
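The conversion back to the binary array format is straightforward (a sketch under the same conventions as the rest of the notebook: one onset followed by length-1 rests per pulse):

```python
import numpy as np

def pulse_lengths_to_rhythm(pulse_lengths):
    """Convert pulse lengths back to a binary onset array, one step per slot."""
    rhythm = []
    for length in pulse_lengths:
        rhythm.extend([1] + [0] * (length - 1))
    return np.array(rhythm)

print(pulse_lengths_to_rhythm([3, 3, 2]))  # [1 0 0 1 0 0 1 0] -- the tresillo
```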
An important feature we want to extract from the audio is tempo (i.e. the time interval between steps). Let's estimate that using the librosa.beat.tempo method (which requires us to first detect onsets, the starting points of musical events in the signal):
tempo = fibonaccistretch.estimate_tempo(y, sr) print("Tempo (calculated): {}".format(tempo)) tempo = 93.0 # Hard-coded from prior knowledge print("Tempo (hard-coded): {}".format(tempo))
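As a sanity check on the numbers involved, tempo converts directly into a beat period in seconds and samples (simple arithmetic; the 22050 Hz sample rate here is an assumption, librosa's default):

```python
tempo = 93.0  # beats per minute, hard-coded as above
sr = 22050    # assumed sample rate in Hz

beat_period = 60.0 / tempo           # seconds per beat
samples_per_beat = sr * beat_period  # samples per beat

print(round(beat_period, 4), round(samples_per_beat))  # 0.6452 14226
```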
<div style="color:gray"> (We can see that the tempo we've estimated differs by approximately 1BPM from the tempo that we've hard-coded from prior knowledge. It's often the case that such automatic feature extraction tools and algorithms require a fair bit of fine-tuning, so we can improve our results by supplying some...
beat_times = fibonaccistretch.calculate_beat_times(y, sr, tempo) print("First 10 beat times (in seconds): {}".format(beat_times[:10]))
Using beats_per_measure we can calculate the times for the start of each measure:
# Work in samples from here on beat_samples = librosa.time_to_samples(beat_times, sr=sr) measure_samples = fibonaccistretch.calculate_measure_samples(y, beat_samples, beats_per_measure) print("First 10 measure samples: {}".format(measure_samples[:10]))
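The measure calculation presumably amounts to taking every `beats_per_measure`-th beat as a measure start (a sketch with hypothetical, evenly spaced beat positions; the real beat tracker's output would not be perfectly regular):

```python
import numpy as np

def calculate_measure_starts(beat_samples, beats_per_measure):
    """Take every beats_per_measure-th beat as the start of a measure."""
    return beat_samples[::beats_per_measure]

beat_samples = np.arange(16) * 14226  # hypothetical evenly spaced beats
print(calculate_measure_starts(beat_samples, 4))  # [     0  56904 113808 170712]
```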
With these markers in place, we can now overlay the tresillo rhythm onto each measure and listen to the result:
fibonaccistretch.overlay_rhythm_onto_audio(tresillo_rhythm, y, measure_samples, sr=sr)
The clicks for measures, pulses, and steps overlap with each other at certain points. While you can hear this because each click is at a different frequency, it can be hard to tell visually in the above figure. We can make this more apparent by plotting each set of clicks with a different color. In the ...
fibonaccistretch.overlay_rhythm_onto_audio(tresillo_rhythm, y, measure_samples, sr=sr, click_colors={"measure": "r", "pulse": "g", "step": "b"})
You can hear that the tresillo rhythm's pulses line up with the harmonic rhythm of "Human Nature"; generally, we want to pick rhythms and audio tracks that have at least some kind of musical relationship. (We could actually try to estimate rhythmic patterns based on onsets and tempo, but that's for another time.) Part ...
original_rhythm = tresillo_rhythm target_rhythm = fibonaccistretch.fibonacci_scale_rhythm(original_rhythm, 1) # "Fibonacci scale" original rhythm by a factor of 1 print("Original rhythm: {}\n" "Target rhythm: {}".format(original_rhythm, target_rhythm))
4.2 Pulse ratios Given an original rhythm and target rhythm, we can compute their pulse ratios, that is, the ratio between each of their pulses:
pulse_ratios = fibonaccistretch.calculate_pulse_ratios(original_rhythm, target_rhythm) print("Pulse ratios: {}".format(pulse_ratios))
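One plausible way to compute pulse ratios (a sketch of what `calculate_pulse_ratios` might do — the module's exact definition may differ): express each pulse as a fraction of its measure in both rhythms, then take the per-pulse ratio of target to original.

```python
import numpy as np

def rhythm_to_pulse_lengths(rhythm):
    """Distances between consecutive onsets, closing out at the end of the measure."""
    onsets = np.flatnonzero(rhythm)
    return np.diff(np.append(onsets, len(rhythm)))

def calculate_pulse_ratios(original, target):
    orig = rhythm_to_pulse_lengths(original) / len(original)
    targ = rhythm_to_pulse_lengths(target) / len(target)
    return targ / orig  # per-pulse duration ratio, target relative to original

original = np.array([1, 0, 0, 1, 0, 0, 1, 0])               # tresillo: pulse lengths [3, 3, 2]
target = np.array([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0])  # expanded: pulse lengths [5, 5, 3]
print(calculate_pulse_ratios(original, target))
```

Note how the first two ratios come out identical and the third differs, which matches what we hear in the stretched audio below.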
4.3 Modifying measures by time-stretching Since we're treating our symbolic rhythms as having the duration of one measure, it makes sense to start by modifying a single measure. Basically what we want to do is: for each pulse, get the audio chunk that maps to that pulse, and time-stretch it based on our calculated puls...
fibonaccistretch.modify_measure??
You'll notice that in the part where we choose stretch methods, there's a function called euclidean_stretch that we haven't defined. We'll get to that in just a second! For now, let's just keep that in the back of our heads, and not worry about it too much, so that we can hear what our modification method sounds like w...
first_measure_data = y[measure_samples[0]:measure_samples[1]] first_measure_modified = fibonaccistretch.modify_measure(first_measure_data, original_rhythm, target_rhythm, stretch_method="timestretch") ipd.A...
It doesn't sound like there's much difference between the stretched version and the original, does it? 4.4 Modifying an entire track by naively time-stretching each pulse To get a better sense, let's apply the modification to the entire audio track:
# Modify the track using naive time-stretch y_modified, measure_samples_modified = fibonaccistretch.modify_track(y, measure_samples, original_rhythm, target_rhythm, stretch_method="timestretch") plt.figure(figsize=(...
Listening to the whole track, the only perceptible difference is that the last two beats of each measure are slightly faster. If we look at the pulse ratios again:
pulse_ratios = fibonaccistretch.calculate_pulse_ratios(original_rhythm, target_rhythm) print(pulse_ratios)
... we can see that this makes sense, as we're time-stretching the first two pulses by the same amount, and then time-stretching the last pulse by a different amount. (Note that while we're expanding our original rhythm along the Fibonacci sequence, this actually corresponds to a contraction when time-stretching. This ...
fibonaccistretch.overlay_rhythm_onto_audio(target_rhythm, y_modified, measure_samples, sr)
Looking at the first pulses of the original rhythm and target rhythm, we want to turn [1 0 0] into [1 0 0 0 0]. To accomplish this, we'll turn to the concept of Euclidean rhythms. 5.2 Generating Euclidean rhythms using Bjorklund's algorithm A Euclidean rhythm is a type of rhythm that can be generated based upon the Euc...
fibonaccistretch.euclid?? gcd = fibonaccistretch.euclid(8, 12) print("Greatest common divisor of 8 and 12 is {}".format(gcd))
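The Euclidean algorithm itself is short, and a well-known shortcut (the Bresenham-style "bucket" formulation, which typically matches Bjorklund's output up to rotation) generates the rhythm directly — a sketch, not the module's implementation:

```python
def euclid(a, b):
    """Greatest common divisor via the Euclidean algorithm."""
    while b:
        a, b = b, a % b
    return a

def euclidean_rhythm(k, n):
    """Distribute k onsets as evenly as possible over n steps (Bresenham-style)."""
    return [1 if (i * k) % n < k else 0 for i in range(n)]

print(euclid(8, 12))           # 4
print(euclidean_rhythm(3, 8))  # [1, 0, 0, 1, 0, 0, 1, 0] -- the tresillo again
```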
You might have noticed that this rhythm is exactly the same as the rhythm produced by contracting the tresillo rhythm along the Fibonacci sequence by a factor of 1:
print(fibonaccistretch.fibonacci_scale_rhythm(tresillo_rhythm, -1))
The resulting pulse ratios are:
print(fibonaccistretch.calculate_pulse_ratios(original_pulse_rhythm, target_pulse_rhythm))
... which doesn't intuitively look like it would produce something any different from what we tried before. However, we might perceive a greater difference because: a) we're working on a more granular temporal level (subdivisions of pulses as opposed to measures), and b) we're adjusting an equally-spaced rhythm (e.g. [...
fibonaccistretch.euclidean_stretch??
Let's take a listen to how it sounds:
# Modify the track y_modified, measure_samples_modified = fibonaccistretch.modify_track(y, measure_samples, original_rhythm, target_rhythm, stretch_method="euclidean") plt.figure(fi...
Much better! With clicks:
fibonaccistretch.overlay_rhythm_onto_audio(target_rhythm, y_modified, measure_samples, sr)
As you can hear, the modified track's rhythm is in line with the clicks, and sounds noticeably different from the original song. This is a pretty good place to end up! Part 6 - Fibonacci stretch: implementation and examples 6.1 Implementation Here's an end-to-end implementation of Fibonacci stretch. A lot of the defaul...
fibonaccistretch.fibonacci_stretch_track??
Now we can simply feed the function a path to an audio file (as well as any parameters we want to customize). This is the exact method that's applied to the sneak peek at the final result up top. The only difference is that we use a 90-second excerpt rather than our original 30-second one:
# "Human Nature" stretched by a factor of 1 using default parameters fibonaccistretch.fibonacci_stretch_track("../data/humannature_90s.mp3", stretch_factor=1, tempo=93.0)
And indeed we get the exact same result. 6.2 Examples: customizing stretch factors Now that we have a function to easily stretch tracks, we can begin playing around with some of the parameters. Here's the 30-second "Human Nature" excerpt again, only this time it's stretched by a factor of 2 instead of 1:
# "Human Nature" stretched by a factor of 2 fibonaccistretch.fibonacci_stretch_track("../data/humannature_30s.mp3", tempo=93.0, stretch_factor=2, overlay_clicks=True)
As mentioned in part 2.2, we can contract rhythms as well using negative numbers as our stretch_factor. Let's try that with "Chan Chan" by the Buena Vista Social Club:
# "Chan Chan" stretched by a factor of -1 fibonaccistretch.fibonacci_stretch_track("../data/chanchan_30s.mp3", stretch_factor=-1, tempo=78.5)
(Note that although we do end up with a perceptible difference (the song now sounds like it's in 7/8), it should actually sound like it's in 5/8, since [1 0 0 1 0 0 1 0] is getting compressed to [1 0 1 0 1]. This is an implementation detail with the Euclidean stretch method that I need to fix.) 6.3 Examples: customizin...
# "I'm the One" stretched by a factor of 1 fibonaccistretch.fibonacci_stretch_track("../data/imtheone_cropped_chance_60s.mp3", tempo=162, original_rhythm=np.array([1,0,0,0,0,1,0,0]), stretch_factor=1)
We can also define a custom target rhythm. In addition, neither original_rhythm nor target_rhythm has to be a Fibonacci rhythm for the stretch algorithm to work (although with this implementation they do both have to have the same number of pulses). Let's try that out with the same verse, going from an original...
# "I'm the One" in 5/4 fibonaccistretch.fibonacci_stretch_track("../data/imtheone_cropped_chance_60s.mp3", tempo=162, original_rhythm=np.array([1,0,0,0,0,1,0,0]), target_rhythm=np.array([1,0,0,0,0,1,0,0,0,0]), overlay_clicks...
As another example, we can give a swing feel to the first movement of Mozart's "Eine kleine Nachtmusik" (K. 525), as performed by A Far Cry:
# "Eine kleine Nachtmusik" with a swing feel fibonaccistretch.fibonacci_stretch_track("../data/einekleinenachtmusik_30s.mp3", tempo=130, original_rhythm=np.array([1,0,1,1]), target_rhythm=np.array([1,0,0,1,0,1]))
It works pretty decently until around 0:09, at which point the assumption of a metronomically consistent tempo breaks down. (This is one of the biggest weaknesses with the current implementation, and is something I definitely hope to work on in the future.) Let's also hear what "Chan Chan" sounds like in 5/4:
# "Chan Chan" in 5/4 fibonaccistretch.fibonacci_stretch_track("../data/chanchan_30s.mp3", tempo=78.5, original_rhythm=np.array([1,0,0,1,0,0,0,0]), target_rhythm=np.array([1,0,0,0,0,1,0,0,0,0])) # Also interesting to try with [1,0,1]
6.4 Examples: customizing input beats per measure We can also work with source audio in other meters. For example, Frank Ocean's "Pink + White" is in 6/8. Here I've stretched it into 4/4 using the rhythm of the bassline, but you can uncomment the other supplied parameters (or supply your own!) to hear how they sound as...
# "Pink + White" stretched by a factor of 1 fibonaccistretch.fibonacci_stretch_track("../data/pinkandwhite_30s.mp3", beats_per_measure=6, tempo=160, # 6/8 to 4/4 using bassline rhythm original_rhythm...
Exercises - Loops and Conditionals
# Exercise 1 - Create a structure that asks the user which day of the week it is. If the day is Sunday or # Saturday, print "Today is a rest day" on the screen; otherwise print "You need to work!" # Exercise 2 - Create a list of 5 fruits and check whether the fruit 'Morango' (strawberry) is part of the...
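A minimal sketch of a solution to Exercise 1, written as a function so the logic is easy to check (in the notebook itself you would read the day with `input()` instead):

```python
def day_message(day):
    """Return the message for a given day of the week."""
    if day.strip().lower() in ("saturday", "sunday"):
        return "Today is a rest day"
    return "You need to work!"

print(day_message("Sunday"))   # Today is a rest day
print(day_message("Tuesday"))  # You need to work!
```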
Cap03/Notebooks/DSA-Python-Cap03-Exercicios-Loops-Condiconais.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
Regression With a Single Feature Using a single feature to make a numerical prediction TO DO - nothing for the moment
# Share functions used in multiple notebooks %run Shared-Functions.ipynb
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
ACKNOWLEDGEMENT The dataset used in this notebook is from Andrew Ng's course on Machine Learning on Coursera. Linear regression has been in use for hundreds of years. What place does it have in the shiny (relatively) new field of machine learning? It's the same end result you've learned in statistics classes, but in a...
# Load up the packages to investigate the data import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.cm as cm %matplotlib inline import seaborn as sns import os # OS-independent way to navigate the file system # Data directory is one directory up in relation to directory of this note...
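The computation this notebook builds toward can be previewed with the closed-form least-squares solution for a single feature (a generic illustration on made-up data, not the notebook's own code or dataset):

```python
import numpy as np

def fit_simple_linear(x, y):
    """Closed-form least squares: slope = cov(x, y) / var(x), intercept from the means."""
    slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])  # exactly y = 2x + 1
print(fit_simple_linear(x, y))      # (2.0, 1.0)
```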
This means that the dataset has 97 rows and 2 columns. Let's see what the data looks like. The first few rows of our data look like this:
data.head()
Step 1: Visualize the Data
# Visualize the data data.plot.scatter(x='Population', y='Profit', figsize=(8,6));