markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
Iterating over groups | # Generate the groups one by one (first the name, then the values)
df
for name, group in df.groupby('key1'):
print(name)
print(group)
# For multi-key groupings, iterating yields a tuple of keys
for (k1, k2), group in df.groupby(['key1', 'key2']):
    print((k1, k2)) # tuple(combination tuple,... | _____no_output_____ | MIT | Pandas/DataAggregations.ipynb | EstebanBrito/TallerIA-SISI2019 |
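The iteration pattern above can be tried on a tiny self-made frame (a minimal sketch; the data here is illustrative, not the notebook's `df`):

```python
import pandas as pd

# A tiny frame with one grouping key
df_demo = pd.DataFrame({'key1': ['a', 'a', 'b'], 'data1': [1, 2, 3]})

# groupby yields (group name, sub-DataFrame) pairs, one per distinct key
group_names = []
for name, group in df_demo.groupby('key1'):
    group_names.append(name)
    assert isinstance(group, pd.DataFrame)

print(group_names)  # ['a', 'b']
```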
Selection | # Syntactic sugar for df['data1'].groupby(df['key1'])
df.groupby('key1')['data1'] # GroupBy object as a Series
df.groupby('key1')[['data2']] # GroupBy object as a DataFrame
# Useful for inspecting a single column
df.groupby(['key1', 'key2'])[['data2']].mean()
# We get a DataFrame if we select two or more columns to inspect
s_g... | _____no_output_____ | MIT | Pandas/DataAggregations.ipynb | EstebanBrito/TallerIA-SISI2019 |
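The single-bracket vs. double-bracket distinction above can be checked directly (a self-contained sketch with made-up data):

```python
import pandas as pd

df_demo = pd.DataFrame({'key1': ['a', 'a', 'b'],
                        'data1': [1.0, 2.0, 3.0],
                        'data2': [4.0, 5.0, 6.0]})

# A single label gives a SeriesGroupBy; a list of labels gives a DataFrameGroupBy
as_series = df_demo.groupby('key1')['data1'].mean()
as_frame = df_demo.groupby('key1')[['data2']].mean()

print(type(as_series).__name__, type(as_frame).__name__)  # Series DataFrame
```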
Grouping with dictionaries | people = pd.DataFrame(np.random.randn(5,5),
columns=['a','b','c','d','e'],
index=['Joe','Steve','Wes','Jim','Travis'])
# Add a few null values
people.iloc[2:3, [1, 2]] = np.nan
people
# Column-to-group mapping
mapping = {'a': 'red', 'b': 'red', 'c': 'blue',
... | _____no_output_____ | MIT | Pandas/DataAggregations.ipynb | EstebanBrito/TallerIA-SISI2019 |
Grouping with functions | # Grouping based on the length of the key column values
# Apply len() to each value and use the resulting integer as the grouping criterion
people
people.groupby(len).sum()
# A harder example
key_list = ['one','one','one','two','two']
people.groupby([len, key_list]).min() | _____no_output_____ | MIT | Pandas/DataAggregations.ipynb | EstebanBrito/TallerIA-SISI2019 |
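Grouping by a function, as above, passes each index label through that function; a minimal sketch (deterministic data instead of the notebook's random values):

```python
import numpy as np
import pandas as pd

people_demo = pd.DataFrame(np.arange(20).reshape(5, 4) * 1.0,
                           columns=list('abcd'),
                           index=['Joe', 'Steve', 'Wes', 'Jim', 'Travis'])

# len() is applied to each index label: Joe/Wes/Jim -> 3, Steve -> 5, Travis -> 6
by_len = people_demo.groupby(len).sum()
print(by_len.index.tolist())  # [3, 5, 6]
```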
Aggregations (agg) Aggregations: transformations that convert arrays into (scalar) numbers - count: How many exist? - sum: How much do they represent? - mean: Mean - median: Median - std, var: Std. dev., variance - min, max: Minimum, maximum - prod: Product of non-null values - first, last: First or last value. Each of th... | # We can use functions that are not defined on a GroupBy object but are defined elsewhere
# quantile() is defined on Series, not on GroupBy
df
grouped = df.groupby('key1')
grouped['data1'].quantile(0.9) # Approximate value at the 90th percentile of the range
# Not an aggregation, but it illustrates using functions that are not built in
grou... | _____no_output_____ | MIT | Pandas/DataAggregations.ipynb | EstebanBrito/TallerIA-SISI2019 |
More on aggregations | # Load a dataset
tips = pd.read_csv('../examples/tips.csv')
# Add a tip column as a % of the total bill
tips['tip_pct'] = tips['tip'] / tips['total_bill']
tips[:6]
# We might want to use different aggregations on different columns
# ... or apply multiple aggregations to a single column
# We group... | _____no_output_____ | MIT | Pandas/DataAggregations.ipynb | EstebanBrito/TallerIA-SISI2019 |
Mapping Aggregations | # Different aggregations for different columns
# --> We get a hierarchical DataFrame if multiple aggregations are applied to at least one column
grouped.agg({'tip': np.max, 'size': 'sum'}) # DataFrame
grouped.agg({'tip_pct': ['min', 'max', 'mean', 'std'], 'size': 'sum'}) # hierarchical DataFrame | _____no_output_____ | MIT | Pandas/DataAggregations.ipynb | EstebanBrito/TallerIA-SISI2019 |
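The flat-vs-hierarchical result described above can be verified on a toy frame (illustrative data, not the tips dataset):

```python
import pandas as pd

tips_demo = pd.DataFrame({'day': ['Sun', 'Sun', 'Mon'],
                          'tip': [1.0, 3.0, 2.0],
                          'size': [2, 4, 2]})
g = tips_demo.groupby('day')

# One aggregation per column -> flat (single-level) columns
flat = g.agg({'tip': 'max', 'size': 'sum'})
# A list of aggregations for any column -> hierarchical (MultiIndex) columns
hier = g.agg({'tip': ['min', 'max'], 'size': 'sum'})

print(flat.columns.nlevels, hier.columns.nlevels)  # 1 2
```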
apply() on GroupBy objects More generally, we can run the apply function on a GroupBy object. | # Function that returns the top N rows with the highest tip/total-bill percentage
def top(df, n=5, column='tip_pct'):
return df.sort_values(by=column)[-n:]
# The 6 rows with the largest tip/bill percentages
top(tips, n=6)
# Get the top 5 rows by tip %, for both smokers and non-smokers
# Group, th... | _____no_output_____ | MIT | Pandas/DataAggregations.ipynb | EstebanBrito/TallerIA-SISI2019 |
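The group-then-apply pattern can be sketched with a stripped-down `top` (toy data; the column values are invented):

```python
import pandas as pd

tips_demo = pd.DataFrame({'smoker': ['Yes', 'Yes', 'No', 'No'],
                          'tip_pct': [0.10, 0.20, 0.15, 0.05]})

def top(df, n=1, column='tip_pct'):
    # Rows with the n largest values of `column`
    return df.sort_values(by=column)[-n:]

# apply calls `top` once per group and stitches the pieces back together
best = tips_demo.groupby('smoker').apply(top)
print(best['tip_pct'].tolist())  # [0.15, 0.2]
```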
Import Pandas and the classifiers to experiment with | import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.neural_network import MLPClassifier... | _____no_output_____ | Apache-2.0 | EvaluateEntireDataset.ipynb | kevindoyle93/fyp-ml-notebooks |
Evaluate models on test data | from sklearn import metrics
def evaluate_model(model, row_name):
training_df = pd.read_csv('data/individual_teams.csv', index_col=0)
test_df = pd.read_csv('data/test_data.csv', index_col=0)
target_feature = 'won_match'
training_columns = [col for col in training_df.columns if col != target_feature]
... | _____no_output_____ | Apache-2.0 | EvaluateEntireDataset.ipynb | kevindoyle93/fyp-ml-notebooks |
Classification with Neural Networks**Neural networks** are a powerful set of machine learning algorithms. Neural network use one or more **hidden layers** of multiple **hidden units** to perform **function approximation**. The use of multiple hidden units in one or more layers, allows neural networks to approximate co... | from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
#from statsmodels.api import datasets
from sklearn import datasets ## Get dataset from sklearn
import sklearn.model_selection as ms
import sklearn.metrics as sklm
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
imp... | _____no_output_____ | MIT | .ipynb_checkpoints/NeuralNetworks5-checkpoint.ipynb | shubhamchouksey/IRIS_Dataset |
To get a feel for these data, you will now load and plot them. The code in the cell below does the following:1. Loads the iris data as a Pandas data frame. 2. Adds column names to the data frame.3. Displays all 4 possible scatter plot views of the data. Execute this code and examine the results. | def plot_iris(iris):
'''Function to plot iris data by type'''
setosa = iris[iris['Species'] == 'setosa']
versicolor = iris[iris['Species'] == 'versicolor']
virginica = iris[iris['Species'] == 'virginica']
fig, ax = plt.subplots(2, 2, figsize=(12,12))
x_ax = ['Sepal_Length', 'Sepal_Width']
y_... | _____no_output_____ | MIT | .ipynb_checkpoints/NeuralNetworks5-checkpoint.ipynb | shubhamchouksey/IRIS_Dataset |
You can see that Setosa (in blue) is well separated from the other two categories. The Versicolor (in orange) and the Virginica (in green) show considerable overlap. The question is how well our classifier will seperate these categories. Scikit Learn classifiers require numerically coded numpy arrays for the features a... | Features = np.array(iris[['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']])
levels = {'setosa':0, 'versicolor':1, 'virginica':2}
Labels = np.array([levels[x] for x in iris['Species']]) | _____no_output_____ | MIT | .ipynb_checkpoints/NeuralNetworks5-checkpoint.ipynb | shubhamchouksey/IRIS_Dataset |
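The dict-lookup encoding above is a common trick for turning string class labels into integers; a tiny self-contained version:

```python
import numpy as np

species = ['setosa', 'virginica', 'setosa', 'versicolor']
levels = {'setosa': 0, 'versicolor': 1, 'virginica': 2}

# Encode string class labels as integers via a dict lookup
labels = np.array([levels[s] for s in species])
print(labels)  # [0 2 0 1]
```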
Next, execute the code in the cell below to split the dataset into test and training set. Notice that unusually, 100 of the 150 cases are being used as the test dataset. | ## Randomly sample cases to create independent training and test data
nr.seed(1115)
indx = range(Features.shape[0])
indx = ms.train_test_split(indx, test_size = 100)
X_train = Features[indx[0],:]
y_train = np.ravel(Labels[indx[0]])
X_test = Features[indx[1],:]
y_test = np.ravel(Labels[indx[1]]) | _____no_output_____ | MIT | .ipynb_checkpoints/NeuralNetworks5-checkpoint.ipynb | shubhamchouksey/IRIS_Dataset |
As is always the case with machine learning, numeric features must be scaled. The code in the cell below performs the following processing:1. A Z-score scale object is defined using the `StandardScaler` function from the Scikit Learn preprocessing package. 2. The scaler is fit to the training features. Subsequently, thi... | scale = preprocessing.StandardScaler()
scale.fit(X_train)
X_train = scale.transform(X_train) | _____no_output_____ | MIT | .ipynb_checkpoints/NeuralNetworks5-checkpoint.ipynb | shubhamchouksey/IRIS_Dataset |
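The fit-on-train, transform-everything pattern can be shown with a one-feature toy array (illustrative numbers only):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_tr = np.array([[0.0], [2.0], [4.0]])
X_te = np.array([[2.0], [6.0]])

# Fit the scaler on the training data only, then reuse it on the test data
scaler = StandardScaler().fit(X_tr)
Z_tr = scaler.transform(X_tr)
Z_te = scaler.transform(X_te)

# Training data becomes zero-mean, unit-variance; test data reuses the train statistics
print(round(Z_tr.mean(), 6), round(Z_tr.std(), 6))  # 0.0 1.0
```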
Now you will define and fit a neural network model. The code in the cell below defines a single hidden layer neural network model with 50 units. The code uses the MLPClassifier function from the Scikit Learn neural_network package. The model is then fit. Execute this code. | nr.seed(1115)
nn_mod = MLPClassifier(hidden_layer_sizes = (50,))
nn_mod.fit(X_train, y_train) | /home/ins/anaconda3/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.
% self.max_iter, ConvergenceWarning)
| MIT | .ipynb_checkpoints/NeuralNetworks5-checkpoint.ipynb | shubhamchouksey/IRIS_Dataset |
Notice that the many neural network model object hyperparameters are displayed. Optimizing these parameters for a given situation can be quite time consuming. Next, the code in the cell below performs the following processing to score the test data subset:1. The test features are scaled using the scaler computed for th... | X_test = scale.transform(X_test)
scores = nn_mod.predict(X_test) | _____no_output_____ | MIT | .ipynb_checkpoints/NeuralNetworks5-checkpoint.ipynb | shubhamchouksey/IRIS_Dataset |
It is time to evaluate the model results. Keep in mind that the problem has been made difficult deliberately, by having more test cases than training cases. The iris data has three species categories. Therefore it is necessary to use evaluation code for a three category problem. The function in the cell below extends c... | def print_metrics_3(labels, scores):
conf = sklm.confusion_matrix(labels, scores)
print(' Confusion matrix')
print(' Score Setosa Score Versicolor Score Virginica')
print('Actual Setosa %6d' % conf[0,0] + ' %5d' % conf[0,1] + ' %5d' % ... | Confusion matrix
Score Setosa Score Versicolor Score Virginica
Actual Setosa 34 1 0
Actual Versicolor 0 25 9
Actual Vriginica 0 2 29
Accuracy 0.88
S... | MIT | .ipynb_checkpoints/NeuralNetworks5-checkpoint.ipynb | shubhamchouksey/IRIS_Dataset |
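The confusion matrix and accuracy quoted above come from sklearn.metrics; a minimal three-class sketch with made-up labels:

```python
import numpy as np
import sklearn.metrics as sklm

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])

# Rows are actual classes, columns are predicted classes
conf = sklm.confusion_matrix(y_true, y_pred)
acc = sklm.accuracy_score(y_true, y_pred)

print(conf.tolist())  # [[2, 0, 0], [0, 1, 1], [0, 0, 2]]
print(round(acc, 3))  # 0.833
```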
Examine these results. Notice the following:1. The confusion matrix has dimension 3X3. You can see that most cases are correctly classified. 2. The overall accuracy is 0.88. Since the classes are roughly balanced, this metric indicates relatively good performance of the classifier, particularly since it was only traine... | def plot_iris_score(iris, y_test, scores):
'''Function to plot iris data by type'''
## Find correctly and incorrectly classified cases
true = np.equal(scores, y_test).astype(int)
## Create data frame from the test data
iris = pd.DataFrame(iris)
levels = {0:'setosa', 1:'versicolor', 2:'virgi... | _____no_output_____ | MIT | .ipynb_checkpoints/NeuralNetworks5-checkpoint.ipynb | shubhamchouksey/IRIS_Dataset |
Examine these plots. You can see how the classifier has divided the feature space between the classes. Notice that most of the errors occur in the overlap region between Virginica and Versicolor. This behavior is to be expected. There is an error in classifying Setosa which is a bit surprising, and which probably arise... | nr.seed(1115)
nn_mod = MLPClassifier(hidden_layer_sizes = (100,100),
max_iter=300)
nn_mod.fit(X_train, y_train)
scores = nn_mod.predict(X_test)
print_metrics_3(y_test, scores)
plot_iris_score(X_test, y_test, scores) | Confusion matrix
Score Setosa Score Versicolor Score Virginica
Actual Setosa 35 0 0
Actual Versicolor 0 29 5
Actual Vriginica 0 2 29
Accuracy 0.93
S... | MIT | .ipynb_checkpoints/NeuralNetworks5-checkpoint.ipynb | shubhamchouksey/IRIS_Dataset |
The `stackplot()` function of [matplotlib](http://python-graph-gallery.com/matplotlib/) allows you to make a [stacked area chart](http://python-graph-gallery.com/stacked-area-plot/). It provides a **baseline** argument that lets you customize the position of the areas around the baseline. Four possibilities exist, and they... | # libraries
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Create data
X = np.arange(0, 10, 1)
Y = X + 5 * np.random.random((5, X.size))
# There are 4 types of baseline we can use:
baseline = ["zero", "sym", "wiggle", "weighted_wiggle"]
# Let's make 4 plots, 1 for each baseline
for n, ... | _____no_output_____ | 0BSD | src/notebooks/252-baseline-options-for-stacked-area-chart.ipynb | s-lasch/The-Python-Graph-Gallery |
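A compact version of the four-baseline comparison (a sketch using a headless backend so it runs without a display; the data is arbitrary):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend
import matplotlib.pyplot as plt
import numpy as np

X = np.arange(0, 10, 1)
Y = np.ones((5, X.size))

fig, axes = plt.subplots(2, 2)
for ax, baseline in zip(axes.flat, ['zero', 'sym', 'wiggle', 'weighted_wiggle']):
    # Same stacked areas in each panel; only the baseline placement differs
    polys = ax.stackplot(X, Y, baseline=baseline)
    ax.set_title(baseline)

print(len(polys))  # one PolyCollection per stacked series -> 5
```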
BASIC MODEL Classes | #import libraries
import numpy as np
import pandas as pd
import pylab as plt
import time
import glob, os, os.path
import osmnx as ox
import networkx as nx
import cv2
class Vehicle:
"""A vehicle class"""
count = 0
vehicles = []
def __init__(self,verb=False):
self.v_uid = self.count #unique id
self.v_hom... | _____no_output_____ | MIT | python/multimodal.ipynb | erick2307/multimodal-evac |
Parameters For bounding box (bbox) of other areas:[OSM](https://www.openstreetmap.org/exportmap=5/33.907/138.460) | #BBOX
#small area is faster (by default this is in the class)
arahama = {'north': 38.2271, 'south': 38.2077,
'east': 140.9894, 'west': 140.9695}
#this is the same extension from Abe san's simulation
kochi = {'north': 33.5978428707312631, 'south': 33.3844862625877710,
'east': 133.7029719124942346,... | _____no_output_____ | MIT | python/multimodal.ipynb | erick2307/multimodal-evac |
Running the model | %%time
#create a model class of an area with number of vehicles and population
M = Model(bbox=kochi,ntype='walk',vehicles=10,population=50,verb=False)
%%time
#plot current situation ('id' is an integer)
M.plot(id=1,save=True,show=True)
%%time
#number of steps for simulation, can plot and make video same time
M.go(sim_t... | CPU times: user 841 ms, sys: 42.7 ms, total: 884 ms
Wall time: 920 ms
| MIT | python/multimodal.ipynb | erick2307/multimodal-evac |
Overlaying the inundation data | #this is the same extension from Abe san's simulation
kochi = {'north': 33.5978428707312631, 'south': 33.3844862625877710,
'east': 133.7029719124942346, 'west': 133.3254475395832799}
e = Environment(kochi)
fig = e.e_plot()
nodes = e.e_get_nodes()
edges = e.e_get_edges()
import rasterio
from rasterio.plot im... | _____no_output_____ | MIT | python/multimodal.ipynb | erick2307/multimodal-evac |
Load Data | sho = netCDF4.Dataset('../data/sho_friction2.nc').variables
t_sho = np.array(sho['t'][:], dtype=np.float32)
s_sho = np.array(sho['s'][:], dtype=np.float32)
v_sho = np.array(sho['v'][:], dtype=np.float32)
plt.figure(figsize=(10, 6), dpi=150)
plt.plot(t_sho, s_sho)
plt.show()
plt.figure(figsize=(10, 6), dpi=150)
plt.plot... | _____no_output_____ | MIT | project/tgkim/newton/notebooks/NewtonRNN (PyTorch - SHOF).ipynb | SYTEARK/ML2022 |
Data descriptionThis dataset represents a sample of 30 days of Criteo live traffic data. Each line corresponds to one impression (a banner) that was displayed to a user. For each banner we have detailed information about the context, if it was clicked, if it led to a conversion and if it led to a conversion that was a... | import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.utils import resample
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import keras
plt.style.use('ggplot')
# Initial data preparation
def add_derived_columns(df):
df_ext = df.... | [145440]
jid
count
2 58572
3 27400
4 15811
5 9922
6 6998
7 4915
8 3739
9 3020
10 2319
11 1907
| Apache-2.0 | promotions/channel-attribution-lstm.ipynb | fengwangjiang/algorithmic-examples |
Last Touch Attribution | def last_touch_attribution(df):
def count_by_campaign(df):
counters = np.zeros(n_campaigns)
for campaign_one_hot in df['campaigns'].values:
campaign_id = np.argmax(campaign_one_hot)
counters[campaign_id] = counters[campaign_id] + 1
return counters
ca... | _____no_output_____ | Apache-2.0 | promotions/channel-attribution-lstm.ipynb | fengwangjiang/algorithmic-examples |
Logistic Regression | def features_for_logistic_regression(df):
def pairwise_max(series):
return np.max(series.tolist(), axis = 0).tolist()
aggregation = {
'campaigns': pairwise_max,
'cats': pairwise_max,
'click': 'sum',
'cost': 'sum',
'conversion': 'max'
}
df_agg = ... | _____no_output_____ | Apache-2.0 | promotions/channel-attribution-lstm.ipynb | fengwangjiang/algorithmic-examples |
Basic LSTM | def features_for_lstm(df, max_touchpoints):
df_proj = df[['jid', 'campaigns', 'cats', 'click', 'cost', 'time_since_last_click_norm', 'timestamp_norm', 'conversion']]
x2d = df_proj.values
x3d_list = np.split(x2d[:, 1:], np.cumsum(np.unique(x2d[:, 0], return_counts=True)[1])[:-1])
x3d ... | Train on 93081 samples, validate on 23271 samples
Epoch 1/5
93081/93081 [==============================] - 135s 1ms/step - loss: 0.3033 - acc: 0.8677 - val_loss: 0.2547 - val_acc: 0.8899
Epoch 2/5
93081/93081 [==============================] - 152s 2ms/step - loss: 0.2603 - acc: 0.8879 - val_loss: 0.2322 - val_acc: 0.9... | Apache-2.0 | promotions/channel-attribution-lstm.ipynb | fengwangjiang/algorithmic-examples |
LSTM with Attention | from keras.models import Sequential
from keras.layers import Dense, LSTM, Input, Lambda, RepeatVector, Permute, Flatten, Activation, Multiply
from keras.constraints import NonNeg
from keras import backend as K
from keras.models import Model
n_steps, n_features = np.shape(x)[1:3]
hidden_units = 64
main_input = Input... | Train on 93081 samples, validate on 23271 samples
Epoch 1/5
93081/93081 [==============================] - 110s 1ms/step - loss: 0.2293 - acc: 0.9018 - val_loss: 0.2162 - val_acc: 0.9110
Epoch 2/5
93081/93081 [==============================] - 109s 1ms/step - loss: 0.2001 - acc: 0.9152 - val_loss: 0.2038 - val_acc: 0.9... | Apache-2.0 | promotions/channel-attribution-lstm.ipynb | fengwangjiang/algorithmic-examples |
Analysis of LSTM-A Model | def get_campaign_id(x_journey_step):
return np.argmax(x_journey_step[0:n_campaigns])
attention_model = Model(inputs=model.input, outputs=model.get_layer('attention_weigths').output)
a = attention_model.predict(x_train)
attributions = np.zeros(n_campaigns)
campaign_freq = np.ones(n_campaigns)
for i, journey in en... | _____no_output_____ | Apache-2.0 | promotions/channel-attribution-lstm.ipynb | fengwangjiang/algorithmic-examples |
Simulation | # Key assumption: If one of the campaigns in a journey runs out of budget,
# then the conversion reward is fully lost for the entire journey
# including both past and future campaigns
def simulate_budget_roi(df, budget_total, attribution, verbose=False):
budgets = np.ceil(attribution * (budget_total / np.sum(attr... | _____no_output_____ | Apache-2.0 | promotions/channel-attribution-lstm.ipynb | fengwangjiang/algorithmic-examples |
Introduction to data manipulation with pandas > One of the best things about Python (especially if you are a data analyst) is the large number of high-level libraries available. > Some of these libraries are found in the standard library, that is, they can be found wherever... | # Import pandas
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
**pd.DataFrames** are pandas' objects par excellence for manipulating data. They are efficient and fast. They are the data structure into which pandas loads the different data formats: when our data is clean and structured, each row represents an observation and each column a variable or characteris... | # Help for the pd.read_csv() function
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
Let's import the data: | # Import house_pricing.csv
# Look at the data
# Type of what we imported
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
Let's make the index represent each house's identifier: ___ 2. Indexing and data selection. There are many ways to select data from DataFrames. Following the article at the end of this document, we will look at the bracket-based ([]) approach and the `loc()` and `iloc()` methods. With the bracket... | # data[0:2:1]
# data[0:2]
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
Now, select from house 7 onward: Finally, select the houses in the odd-numbered rows: Similarly, for a column selection, we can use a list with the names of the required columns. | # Select the n_bedrooms column
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
Finally, we select two columns: | # Select the n_bedrooms and size columns
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
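The Series-vs-DataFrame difference between `data['col']` and `data[['col']]` can be seen on a toy frame (hypothetical house data):

```python
import pandas as pd

data_demo = pd.DataFrame({'n_bedrooms': [2, 3], 'size': [80.0, 120.0]})

# A bare label returns a Series; a list of labels returns a DataFrame
one_col = data_demo['n_bedrooms']
two_cols = data_demo[['n_bedrooms', 'size']]
print(type(one_col).__name__, type(two_cols).__name__)  # Series DataFrame
```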
Great, we have seen that brackets are useful. There are also the powerful `loc` and `iloc` methods, which give us the power to select both at once: columns and rows. How do they differ? - The `loc` method lets us select rows and columns of our data based on labels. First, you must specif... | # Reset the index in place
# Reassign an alphabetical index
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
Now then. Let's select the first house with both methods: | # First house with loc
# First house with iloc
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
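The label-vs-position contrast between `loc` and `iloc`, on a toy frame with the alphabetical index used above:

```python
import pandas as pd

data_demo = pd.DataFrame({'n_bedrooms': [2, 3, 4]}, index=['A', 'B', 'C'])

# loc selects by label, iloc by integer position; both pick the first house here
by_label = data_demo.loc['A']
by_position = data_demo.iloc[0]
print(by_label.equals(by_position))  # True
```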
Now, let's select houses A and C with both methods: | # Houses A and C with loc
# Houses A and C with iloc
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
Now, from houses B and E, we want their sizes and their numbers of bedrooms: | # loc
# iloc
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
Now, we want only the sizes and the prices, but for all the houses: | # loc
# iloc
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
How about that? We now have several ways to select and index certain data. This is undoubtedly very useful. On the other hand, we often want to obtain certain records (clients, in our example) that meet some requirements. For example: - they are over 18 years old, or - their time on the platform is less... | # Mean
# Median
# Standard deviation
# Overall summary
| _____no_output_____ | MIT | Sesion1/2pandas.ipynb | gdesirena/Taller_UNSIS |
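The requirement-based selection described above is done with boolean masks; a sketch with invented client data:

```python
import pandas as pd

clients = pd.DataFrame({'age': [17, 25, 40],
                        'years_on_platform': [1, 3, 6]})

# A boolean Series inside [] keeps only the rows where it is True
adults = clients[clients['age'] >= 18]
# Combine conditions with & (and) or | (or), each wrapped in parentheses
recent_adults = clients[(clients['age'] >= 18) & (clients['years_on_platform'] < 5)]
print(len(adults), recent_adults['age'].tolist())  # 2 [25]
```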
Guessing the Parameters for a PC-SAFT Model of 1k3f (VORATEC SD 301) PolyolBegun July 24, 2021 to produce plots for ICTAM 2020+1.**NOTE: ALL CALCULATIONS SHOWN HERE USE N = 41, BUT TO BE CONSISTENT WITH N = 123 FOR 3K2F (2700 G/MOL), SHOULD USE N = 45 (~123/2.7)** | %load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import dataproc
import plot
from importlib import reload
# System parameters
# molecular weight of CO2
mw_co2 = 44.01
# conversion of m3 per mL
m3_per_mL = 1E-6
# Save plots?
save_plots = True
# file path to sa... | _____no_output_____ | MIT | notebooks/1k3f_pc-saft_params.ipynb | andylitalo/g-adsa |
Loads data into dictionary. | d = dataproc.load_proc_data(csv_file_list, data_folder) | _____no_output_____ | MIT | notebooks/1k3f_pc-saft_params.ipynb | andylitalo/g-adsa |
Compare results of prediction with guessed PC-SAFT parameters to actual data. Solubility | tk_fs = 18
ax_fs = 20
# folder of csv data files showing sensitivity of DFT predictions to PC-SAFT parameters
dft_sensitivity_folder = 'dft_pred//1k3f_30c_sensitivity//'
# loads dft predictions into a similarly structured dictionary
d_dft = dataproc.load_dft(dft_sensitivity_folder)
fig = plt.figure(figsize=(12, 4))... | Analyzing dft_pred//1k3f_30c_sensitivity\1k3f_30c.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_229-3~sigma_3-01.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_233-0~sigma_3-01.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_263-0~sigma_3-17.csv
Analyzing dft_pred//1k3f_60c_sensitivity\1k3f_60c.csv
Analyzing d... | MIT | notebooks/1k3f_pc-saft_params.ipynb | andylitalo/g-adsa |
Interfacial Tension | reload(dataproc)
reload(plot)
ax_fs = 20
tk_fs = 18
# folder of csv data files showing sensitivity of DFT predictions to PC-SAFT parameters
dft_sensitivity_folder = 'dft_pred//1k3f_30c_sensitivity//'
# loads dft predictions into a similarly structured dictionary
d_dft = dataproc.load_dft(dft_sensitivity_folder)
fig... | _____no_output_____ | MIT | notebooks/1k3f_pc-saft_params.ipynb | andylitalo/g-adsa |
Specific Volume | # folder of csv data files showing sensitivity of DFT predictions to PC-SAFT parameters
dft_sensitivity_folder = 'dft_pred//1k3f_30c_sensitivity//'
# loads dft predictions into a similarly structured dictionary
d_dft = dataproc.load_dft(dft_sensitivity_folder)
fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(11... | Analyzing dft_pred//1k3f_30c_sensitivity\1k3f_30c.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_229-3~sigma_3-01.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_233-0~sigma_3-01.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_263-0~sigma_3-17.csv
Analyzing dft_pred//1k3f_60c_sensitivity\1k3f_60c.csv
Analyzing d... | MIT | notebooks/1k3f_pc-saft_params.ipynb | andylitalo/g-adsa |
Basic imports | import requests
from bs4 import BeautifulSoup
from PIL import Image
from io import BytesIO
| _____no_output_____ | MIT | aulas/salvando-img-previsao-do-tempo/codigos/parte_2.ipynb | eddyrodrigues/cursou.github.io |
Starting the code | wblink='https://www.weather.gov/okx/winter#tab-2'
wblink2='https://www.weather.gov'
req=requests.get(wblink)
html_code = BeautifulSoup(req.text, 'html.parser')
#help(html_code)
html_code.find_all(id="stsImg", limit=1)
html_code.find_all(id="stsImg", limit=1)[0].attrs['src']
link_img = html_code.find_all(id="stsImg", l... | _____no_output_____ | MIT | aulas/salvando-img-previsao-do-tempo/codigos/parte_2.ipynb | eddyrodrigues/cursou.github.io |
Comparing images | arquivo_temp = open("old_forecast1.PNG", "rb")
D1 = arquivo_temp.read()
arquivo2 = open('previsao_neve.png', 'rb')
D2 = arquivo2.read()
if D1 == img_bytes:
    print("Images are identical")
else:
    print("Images are different")
arquivo2.close()
arquivo_temp.close(... | _____no_output_____ | MIT | aulas/salvando-img-previsao-do-tempo/codigos/parte_2.ipynb | eddyrodrigues/cursou.github.io |
New code with an interval | import time
while(True):
img_req = requests.get(wb_final_img)
img_bytes = img_req.content
tipos = ['jpg', 'png', 'PNG', 'JPG', 'JPEG', 'jpeg']
tipo_final = ''
for tipo in tipos:
if(tipo in str(img_bytes)):
tipo_final = tipo
tipo_final
print("tipo do arquivo_baixado", tip... | tipo do arquivo_baixado PNG
Reiniciando o código
tipo do arquivo_baixado PNG
Reiniciando o código
tipo do arquivo_baixado PNG
Reiniciando o código
tipo do arquivo_baixado PNG
Reiniciando o código
tipo do arquivo_baixado PNG
Reiniciando o código
tipo do arquivo_baixado PNG
Reiniciando o código
tipo do arquivo_baixado PN... | MIT | aulas/salvando-img-previsao-do-tempo/codigos/parte_2.ipynb | eddyrodrigues/cursou.github.io |
**Artificial Intelligence - MSc** CS6134 - MACHINE LEARNING APPLICATIONS. Instructor: Enrique Naredo. CS6134_Exercise_2.3 Imports | # import libraries
from sklearn.linear_model import LogisticRegression
from pandas import DataFrame
from mlxtend.plotting import plot_decision_regions
import matplotlib.pyplot as plt
import numpy as np | _____no_output_____ | BSD-3-Clause | Notebooks/Week-2/CS6134_Exercise_2_3.ipynb | EnriqueNaredoGarcia/UL-CS6134 |
Real Dataset | from sklearn import datasets
# Loading some example data
iris = datasets.load_iris()
X2 = iris.data[:, [0, 2]]
y2 = iris.target
# create a data frame
df = DataFrame(dict(x=X2[:,0], y=X2[:,1], label=y2))
# three classes: 0:'cyan', 1:'brown', 2:'yellow'
colors = {0: 'cyan', 1: 'brown', 2: 'yellow'}
# figure
fig, ax = plt.subplots()
grouped = df.groupby('label')
# scatter ... | _____no_output_____ | BSD-3-Clause | Notebooks/Week-2/CS6134_Exercise_2_3.ipynb | EnriqueNaredoGarcia/UL-CS6134 |
Training & Test Data | from sklearn.model_selection import train_test_split
# training: 70%-30%
X_train, X_test, y_train, y_test = train_test_split(X2, y2, test_size=0.3)
X_train[0:10] | _____no_output_____ | BSD-3-Clause | Notebooks/Week-2/CS6134_Exercise_2_3.ipynb | EnriqueNaredoGarcia/UL-CS6134 |
Logistic Regresion * [Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) is a statistical model that in its basic form uses a logistic function to model a binary dependent variable, although many more complex extensions exist. * In regression analysis, logistic regression (or logit regression) is ... | # Logistic Regression model
LR_model = LogisticRegression()
# fit a Logistic Regression model to the data
LR_model.fit(X_train, y_train) | _____no_output_____ | BSD-3-Clause | Notebooks/Week-2/CS6134_Exercise_2_3.ipynb | EnriqueNaredoGarcia/UL-CS6134 |
Decision boundary from [Wikipedia](https://en.wikipedia.org/wiki/Decision_boundary): * A decision boundary is the region of a problem space in which the output label of a classifier is ambiguous.* If the decision surface is a hyperplane, then the classification problem is linear, and the classes are linearly separable.... | # Plotting the decision boundary
# from the LogisticRegression model
plot_decision_regions(X_train, y_train, clf=LR_model)
| /usr/local/lib/python3.7/dist-packages/mlxtend/plotting/decision_regions.py:244: MatplotlibDeprecationWarning: Passing unsupported keyword arguments to axis() will raise a TypeError in 3.3.
ax.axis(xmin=xx.min(), xmax=xx.max(), y_min=yy.min(), y_max=yy.max())
| BSD-3-Clause | Notebooks/Week-2/CS6134_Exercise_2_3.ipynb | EnriqueNaredoGarcia/UL-CS6134 |
Predictions | # make predictions (assign class labels)
y_pred = LR_model.predict(X_test)
# show the inputs and predicted outputs
for i in range(len(X_test)):
    print(X_test[i], y_pred[i])
# create a data frame
df_new = DataFrame(dict(x=X_test[:,0], y=X_test[:,1], label=y_pred))
# show 12 rows
df_new.head(12)
# three classes: 'red', 'blue', 'green'
# figure
fig2, ax2 = plt.subplots()
# new data
grouped = df_new.groupby('label')
# scatter plot
for key2, gr... | /usr/local/lib/python3.7/dist-packages/mlxtend/plotting/decision_regions.py:244: MatplotlibDeprecationWarning: Passing unsupported keyword arguments to axis() will raise a TypeError in 3.3.
ax.axis(xmin=xx.min(), xmax=xx.max(), y_min=yy.min(), y_max=yy.max())
| BSD-3-Clause | Notebooks/Week-2/CS6134_Exercise_2_3.ipynb | EnriqueNaredoGarcia/UL-CS6134 |
US - Baby Names Introduction: We are going to use a subset of [US Baby Names](https://www.kaggle.com/kaggle/us-baby-names) from Kaggle. The file contains names from 2004 until 2014. Step 1. Import the necessary libraries | import pandas as pd
import numpy as np | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/06_Stats/US_Baby_Names/US_Baby_Names_right.csv). Step 3. Assign it to a variable called baby_names. | baby_names = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/06_Stats/US_Baby_Names/US_Baby_Names_right.csv")
baby_names.tail() | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
Step 4. See the first 10 entries | baby_names.head(10) | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
Step 5. Delete the column 'Unnamed: 0' and 'Id' | #del baby_names['Unnamed: 0']
del baby_names['Id'] | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
Step 6. Are there more male or female names in the dataset? | baby_names["Gender_n"] = baby_names.Gender.map({"F":0,"M":1})
male = baby_names["Gender_n"].sum()
perc_male = male/len(baby_names.Gender)
perc_male # more female | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
Step 7. Group the dataset by name and assign to names | del baby_names["Year"]
names = baby_names.groupby(by="Name").sum()
names.sort_values("Count", ascending = 0).head() | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
Step 8. How many different names exist in the dataset? | len(names) | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
Step 9. What is the name with most occurrences? | names.Count.idxmax() | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
Step 10. How many different names have the least occurrences? | len(names[names.Count == names.Count.min()]) | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
Step 11. What is the median name occurrence? | names[names.Count == names.Count.median()] | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
Step 12. What is the standard deviation of names? | names.Count.std() | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
Step 13. Get a summary with the mean, min, max, std and quartiles. | names.describe() | _____no_output_____ | BSD-3-Clause | 06_Stats/US_Baby_Names/Exercises.ipynb | Gioparra91/Pandas-exercise |
This IPython Notebook contains simple examples of the line function. To clear all previously rendered cell outputs, select from the menu: Cell -> All Output -> Clear | import numpy as np
from six.moves import zip
from bokeh.plotting import figure, show, output_notebook
N = 4000
x = np.random.random(size=N) * 100
y = np.random.random(size=N) * 100
radii = np.random.random(size=N) * 1.5
colors = ["#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)]
output_notebook()... | _____no_output_____ | Apache-2.0 | pkgs/bokeh-0.11.1-py27_0/Examples/bokeh/plotting/notebook/color_scatterplot.ipynb | wangyum/anaconda |
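The interesting step in the cell above is building one `"#RRGGBB"` string per point, with red driven by x, green by y, and blue fixed at 150. A self-contained sketch of that color mapping (the helper name and the clipping to 0–255 are my additions, not from the notebook):

```python
import numpy as np

# Map coordinates to hex colors the same way the list comprehension above
# does: red = 50 + 2x, green = 30 + 2y, blue fixed, clipped to byte range.
def coords_to_hex(x, y, blue=150):
    reds = np.clip(50 + 2 * x, 0, 255).astype(int)
    greens = np.clip(30 + 2 * y, 0, 255).astype(int)
    return ["#%02x%02x%02x" % (r, g, blue) for r, g in zip(reds, greens)]

x = np.array([0.0, 50.0, 100.0])
y = np.array([0.0, 50.0, 100.0])
print(coords_to_hex(x, y))  # → ['#321e96', '#968296', '#fae696']
```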
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from IPython import display
animate=True
def vehicle(v,t,u,load):
# inputs
# v = vehicle velocity (m/s)
# t = time (sec)
# u = gas pedal position (-50% to 100%)
# load = passenger load + cargo (kg)
... | _____no_output_____ | MIT | velocity_control_PID.ipynb | dchatterjee/control-systems-playbook | |
pc_to_tex A script that writes point-cloud data into textures, as required by PointCloudShader [](https://colab.research.google.com/github/Kuwamai/pc_to_tex/blob/main/pc_to_tex.ipynb) Usage Preparation 1. Prepare point-cloud data by following this article * [Bringing point clouds into social VR - even Kuwamai can do it](https://kuwamai.hatenablog.com/entry/... | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import struct
from google.colab import drive
drive.mount("/content/drive") | _____no_output_____ | MIT | pc_to_tex.ipynb | Kuwamai/pc_to_tex |
Settings 1. Fill in the `file_path` variable below with the upload path on your Google Drive * For example, if you created a `pc_to_tex` folder in `My Drive`, it becomes `drive/My Drive/pc_to_tex/` 1. Fill in `pc_name` with the file name of the uploaded point cloud 1. Fill in `column_names` with the name of each column $(x, y, z, r, g, b)$ * No changes are needed if you followed the same steps as the article * Edit as needed if the coordinate system differs or the data includes normals or other fields 1. Fill in `tex_width` with the width of the generated textures * The square of this value is the number of points saved... | file_path = "drive/My Drive/pc_to_tex/"
pc_name = "ShibuyaUnderground.asc"
column_names = ("x", "z", "y", "ignore", "r", "g", "b")
tex_width = 1024
skiprows = 1
center_pos = pd.DataFrame([(-11830.2, -37856, 3.82242)], columns=["x", "z", "y"])
pc = pd.read_table(file_path + pc_name, sep=" ", header=None, names=column_na... | ↓ Check that the data matches the header names
| MIT | pc_to_tex.ipynb | Kuwamai/pc_to_tex |
Write point positions into textures | def save_tex(r, c, tex_width, tex_num):
pos_tex = np.pad(c, ((0, tex_width * 2 - c.shape[0]), (0,0), (0,0)), "constant")
pos_tex = Image.fromarray(np.uint8(np.round(pos_tex)))
pos_tex.save(file_path + "pos" + str(tex_num) + ".png")
tex_num = 0
pc[["x", "y", "z"]] = pc[["x", "y", "z"]] - center_pos[["x", "y"... | _____no_output_____ | MIT | pc_to_tex.ipynb | Kuwamai/pc_to_tex |
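The position-writing code above is truncated, but the core idea is storing a point's xyz in the RGB channels of a texel. A simplified sketch of that encoding (an assumed scheme — normalizing over a known bounding box, simpler than the notebook's full pipeline):

```python
import numpy as np

# Map each coordinate into 0..255 over a bounding box [lo, hi] so one
# point fits in one RGB texel, then decode it back to world space.
def encode_positions(points, lo, hi):
    scaled = (points - lo) / (hi - lo)               # normalize to 0..1
    return np.round(scaled * 255).astype(np.uint8)   # one byte per axis

def decode_positions(texels, lo, hi):
    return texels.astype(float) / 255 * (hi - lo) + lo

pts = np.array([[0.0, 1.0, -1.0], [2.0, -2.0, 0.5]])
lo, hi = -2.0, 2.0
tex = encode_positions(pts, lo, hi)
back = decode_positions(tex, lo, hi)
# quantization error is at most half a step: (hi - lo) / 255 / 2
print(np.abs(back - pts).max() <= (hi - lo) / 255 / 2 + 1e-9)  # → True
```

The 8-bit quantization is what limits precision here, which is why the point cloud must be recentered (`center_pos`) before encoding.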
Write colors into textures | cols = pc[["r", "g", "b"]].values.reshape([-1, 1, 3])
tex_num = np.ceil((len(cols)) / tex_width ** 2)
tex_length = int(tex_num * (tex_width ** 2) - len(cols))
col_texs = np.pad(cols, ((0,tex_length),(0,0),(0,0)), "constant")
col_texs = np.array_split(col_texs, tex_num)
for i, tex in enumerate(col_texs):
col_tex = ... | _____no_output_____ | MIT | pc_to_tex.ipynb | Kuwamai/pc_to_tex |
large cells | # simple code
2 + 2
# simple variable
x = 3
# print it
print(x)
# a function
def print_hello(name):
    print('hello {0}'.format(name))
print_hello("bro") | hello bro
| MIT | notebooks/my first notebook.ipynb | maguas01/titanic |
Use Help → Keyboard Shortcuts to find all the keyboard shortcuts; Esc + L toggles line numbers | !python --version
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(100)) | _____no_output_____ | MIT | notebooks/my first notebook.ipynb | maguas01/titanic |
matplotlib lets you plot inline in the Jupyter notebook | %time x = range(10000) | Wall time: 1 ms
| MIT | notebooks/my first notebook.ipynb | maguas01/titanic |
Time your code with the %time and %%timeit magics | %%timeit x = range(10000)
max(x) | 1000 loops, best of 3: 221 µs per loop
| MIT | notebooks/my first notebook.ipynb | maguas01/titanic |
%%writefile writes the cell contents to a file | %%writefile test.txt
this is some stuff that i wrote form jupiter notebook | Writing test.txt
| MIT | notebooks/my first notebook.ipynb | maguas01/titanic |
%ls lists the directory contents | %ls
| Volume in drive F has no label.
Volume Serial Number is C816-9EA8
Directory of F:\anacondaWD
23-Jul-18 00:19 <DIR> .
23-Jul-18 00:19 <DIR> ..
23-Jul-18 00:00 <DIR> .ipynb_checkpoints
23-Jul-18 00:19 16,013 My First Notebook.ipynb
23-Jul-18 00:18 54... | MIT | notebooks/my first notebook.ipynb | maguas01/titanic |
We can render LaTeX in Jupyter with the %%latex cell magic | %%latex
\begin{align}
Gradient: \nabla J = -2H^T (Y -HW)
\end{align} | _____no_output_____ | MIT | notebooks/my first notebook.ipynb | maguas01/titanic |
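The gradient rendered above follows from the least-squares cost $J = (Y - HW)^\top (Y - HW)$ — an assumption, since the notebook does not define $J$:

```latex
\begin{align}
J(W) &= (Y - HW)^\top (Y - HW) \\
     &= Y^\top Y - 2\, W^\top H^\top Y + W^\top H^\top H W \\
\nabla_W J &= -2 H^\top Y + 2 H^\top H W = -2 H^\top (Y - HW)
\end{align}
```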
%load_ext loads IPython extensions | %%!
pip install ipython-sql
%load_ext sql
%sql sqlite://
%%sql
create table classT(name, age, marks);
insert into classT values("bob", 22, 99);
insert into classT values("tom", 21, 88);
%sql select * from classT; | * sqlite://
Done.
| MIT | notebooks/my first notebook.ipynb | maguas01/titanic |
List all of the magic functions with %lsmagic | %lsmagic
import sklearn.tree
help(sklearn.tree._tree.Tree) | Help on class Tree in module sklearn.tree._tree:
class Tree(__builtin__.object)
| Array-based representation of a binary decision tree.
|
| The binary tree is represented as a number of parallel arrays. The i-th
| element of each array holds information about the node `i`. Node 0 is the
| tree's root. You c... | MIT | notebooks/my first notebook.ipynb | maguas01/titanic |
0.0. IMPORTS | import math
import inflection
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.core.display import HTML
from IPython.display import Image
from datetime import datetime, timedelta | _____no_output_____ | MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |
0.1. Helper Functions 0.2. Loading data | df_sale_raw = pd.read_csv( 'base de dados/train.csv', low_memory=False)
df_store_raw = pd.read_csv( 'base de dados/store.csv', low_memory=False)
df_sale_raw.sample()
df_store_raw.sample()
df_raw = pd.merge( df_sale_raw, df_store_raw, how='left', on='Store')
df_raw.sample() | _____no_output_____ | MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |
1.0. STEP 01 - DATA DESCRIPTION | df1 = df_raw.copy()
1.1. Rename Columns | df1.columns
cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo',
'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment',
'CompetitionDistance', 'CompetitionOpenSinceMonth',
'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek',
'Promo2SinceYe... | _____no_output_____ | MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |
1.2. Data Dimensions | print( f'Number of Rows: {df1.shape[0]}')
print( f'Number of Columns: {df1.shape[1]}') | Number of Rows: 1017209
Number of Columns: 18
| MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |
1.3. Data Types | df1['date'] = pd.to_datetime( df1['date'] )
df1.dtypes | _____no_output_____ | MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |
1.4. Check NA | df1.isna().sum() | _____no_output_____ | MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |
1.5. Fill Out NA | df1.sample()
# competition_distance
df1['competition_distance'] = df1['competition_distance'].apply( lambda x: 200000.0 if math.isnan( x ) else x )
# competition_open_since_month
df1['competition_open_since_month'] = df1.apply( lambda x: x['date'].month if math.isnan( x['competition_open_since_month'] ) else x[... | _____no_output_____ | MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |
1.6. Change Types | df1.dtypes
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype( int )
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype( int )
df1['promo2_since_week'] = df1['promo2_since_week'].astype( int )
df1['promo2_since_year'] = df1['promo2_since_year'].astype( int ) | _____no_output_____ | MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |
1.7. Descriptive Statistics | num_attributes = df1.select_dtypes( include= [ 'int64', 'float64'] )
cat_attributes = df1.select_dtypes( exclude= [ 'int64', 'float64', 'datetime64[ns]'] )
cat_attributes.sample() | _____no_output_____ | MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |
1.7.1. Numerical Attributes | # Central Tendency - mean, median
ct1 = pd.DataFrame( num_attributes.apply( np.mean ) ).T
ct2 = pd.DataFrame( num_attributes.apply( np.median ) ).T
# Dispersion - std, min, max, range, skew, kurtosis
d1 = pd.DataFrame( num_attributes.apply( np.std) ).T
d2 = pd.DataFrame( num_attributes.apply( min ) ).T
d3 = pd.DataF... | /opt/anaconda3/envs/store_sales_predict/lib/python3.8/site-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level fu... | MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |
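The summary table being assembled above combines central-tendency and dispersion metrics per column. A sketch of the same metrics for a single toy column, using pandas' own `skew`/`kurtosis` (note these are the sample versions, while `np.std` above is the population standard deviation):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 2.0, 3.0, 10.0])
summary = {
    "mean": s.mean(),
    "median": s.median(),
    "std": s.std(ddof=0),        # population std, matching np.std
    "range": s.max() - s.min(),
    "skew": s.skew(),            # > 0 here: the 10.0 drags the tail right
    "kurtosis": s.kurtosis(),
}
print(summary["median"], summary["range"])  # → 2.0 9.0
```

The large positive skew on a column like `sales` is a common reason to log-transform the target later in this kind of pipeline.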
1.7.2. Categorical Attributes | cat_attributes.apply( lambda x: x.unique().shape[0])
aux1 = df1[(df1['state_holiday'] != '0' ) & (df1['sales'] > 0 )]
plt.subplot(1, 3, 1)
sns.boxplot( x='state_holiday' ,y='sales' , data=aux1 )
plt.subplot(1, 3, 2)
sns.boxplot( x='store_type' ,y='sales' , data=aux1 )
plt.subplot(1, 3, 3)
sns.boxplot( x='assortment'... | _____no_output_____ | MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |
2.0. STEP 02 - FEATURE ENGINEERING | df2 = df1.copy()
Image( 'images/MIndMapHypothesis.png') | _____no_output_____ | MIT | m03_v01_store_sales_predict.ipynb | artavale/Rossman-Forcast-Sales |