Problem 1 - Cross-validation (10 pts)
Cross-validation is the method we can use to choose hyperparameters, such as the ridge penalty $\lambda$ in ridge regression. It is an empirical technique that is incredibly effective, even if the data does not exactly fit the assumptions of your model (e.g. if the errors are not Gaussian). You can read more about cross-validation [here](https://en.wikipedia.org/wiki/Cross-validation_(statistics%29).
As a basic scheme, imagine that you have data that is split into two parts: training and test. Let's suppose you try a bunch of different ridge parameters by fitting models using the training data. If you check how well each model predicts the training data (that you used to generate the weights), then you'll get a bad answer: most likely the smallest ridge parameter will work best (you saw this in the last homework). If you check how well each model predicts the test data, then you're cheating: you're touching the test data before you're done model fitting. Don't cheat by fitting hyperparameters using the test data!! So obviously we need another way to check the ridge parameters.
Now suppose that you further split the training dataset into two parts: the "training" set (yes it's the same name, get over it), and a "validation" set. Now you can train your models using the training data, check how well they work using the validation dataset, and use those numbers to pick your hyperparameter. Then, finally, you can use the whole training dataset and the best hyperparameter to fit a model that you'll use to try to predict the test data (which you haven't touched until now). No cheating! This is cross-validation.
In this problem, you are going to use three different cross-validation methods to select $\lambda$ for ridge regression.
BIG HINT: The np.delete function is your best friend for this problem.
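To see why `np.delete` is so handy here, consider this toy sketch (the arrays `x`, `y` and the index `t` are made up for illustration, not the homework data): deleting the held-out row gives the training set for that split in one call.

```python
import numpy as np

# Hypothetical 5-point dataset with 2 features
x = np.arange(10).reshape(5, 2)
y = np.arange(5)

t = 2                               # index of the held-out datapoint
x_val, y_val = x[t:t+1], y[t:t+1]   # validation "set": just point t
x_trn = np.delete(x, t, axis=0)     # training set: all the other rows
y_trn = np.delete(y, t)

print(x_trn.shape)  # (4, 2)
```

`np.delete` returns a copy with the given row(s) removed, so the original arrays are untouched between iterations.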
|
# Load data!
p1a_file = np.load('homework_2_p1a_data.npz')
x = p1a_file['x']
y = p1a_file['y']
x_test = p1a_file['x_test']
y_test = p1a_file['y_test']
n_training, n_features = x.shape
|
homeworks/homework_2/homework_2.ipynb
|
alexhuth/n4cs-fa2017
|
gpl-3.0
|
(a) Leave-one-out (LOO) cross-validation (3 pts)
This is perhaps the simplest cross-validation method. Select one datapoint from the training set to be the validation set, train on the other $n-1$ datapoints, and test on the held-out point. Repeat this process for each datapoint, then average the results.
This works very well when the datapoints are independent, which is going to be the case in this first dataset.
|
lambdas = np.logspace(-3, 5, 10)
val_ses = np.zeros((n_training, len(lambdas)))
def ridge(x, y, lam):
    """This function does ridge regression with the stimuli x and responses y with
    ridge parameter lam (short for lambda). It returns the weights.
    This is definitely not the most efficient way to do this, but it's fine for now.
    """
    n_features = x.shape[1]
    beta_ridge = np.linalg.inv(x.T.dot(x) + lam * np.eye(n_features)).dot(x.T).dot(y)
    return beta_ridge

for t in range(n_training):
    # split the training dataset into two parts: one with only point t,
    # and one with all the other datapoints
    x_trn = ## YOUR CODE HERE ##
    y_trn = ## YOUR CODE HERE ##
    x_val = ## YOUR CODE HERE ##
    y_val = ## YOUR CODE HERE ##
    for ii in range(len(lambdas)):
        # fit model using x_trn & predict y_val
        y_val_hat = ## YOUR CODE HERE ##
        # store squared error in val_ses
        val_ses[t,ii] = ## YOUR CODE HERE ##
# Plot the mean squared error with each lambda, averaged across validation datapoints
## YOUR CODE HERE ##
# Choose the best lambda (i.e. the one with the least validation error), print it
best_lambda = ## YOUR CODE HERE ##
print("Best lambda:", best_lambda)
# Fit a model using the whole training set and the best lambda
beta_hat = ## YOUR CODE HERE ##
# Use that model to predict y_test
y_test_hat = ## YOUR CODE HERE ##
# Compute the MSE on the test dataset, print it
test_mse = ## YOUR CODE HERE ##
print("Test MSE:", test_mse)
|
homeworks/homework_2/homework_2.ipynb
|
alexhuth/n4cs-fa2017
|
gpl-3.0
|
(b) $k$-fold cross-validation (3 pts)
You may have noticed that LOO CV is quite slow. That's because it needs to fit one model per datapoint. If you have a lot of datapoints, that's just not gonna work! Further, as mentioned above, LOO only works well when your datapoints are independent. Suppose you're recording fMRI data. You know that the underlying BOLD signal is very low frequency--meaning that the individual datapoints are anything but independent.
Another option is $k$-fold cross-validation. In this scheme, you split the training dataset into $k$ parts. Then you use $k-1$ of the parts to train the model, and use the $k$'th part as your validation dataset. Repeat this, holding out each part in turn. This means you only need to fit $k$ sets of models, instead of one for each datapoint.
If your entire training dataset has $n$ datapoints, each "fold" should contain $\frac{n}{k}$ datapoints, and each datapoint should only be in one fold. Assuming that $n/k$ is an integer, a nice way to do this is to break up the training dataset into $k$ contiguous parts.
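Assuming contiguous folds as suggested, the index bookkeeping might look like this sketch (the sizes `n = 12`, `k = 3` are toy numbers, not the homework's `n_training`):

```python
import numpy as np

n, k = 12, 3
n_per_fold = n // k          # integer number of points per fold
x = np.arange(n)

fold = 1                     # hold out the second fold
val_idx = np.arange(fold * n_per_fold, (fold + 1) * n_per_fold)
x_val = x[val_idx]           # the points in this fold
x_trn = np.delete(x, val_idx)  # the points in the other k-1 folds

print(val_idx)         # [4 5 6 7]
print(x_trn)           # [ 0  1  2  3  8  9 10 11]
```

Because the folds are contiguous and disjoint, every point lands in exactly one validation set across the $k$ iterations.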
|
k = 6 # let's do 6 folds
n_per_fold = n_training // k # number of datapoints per fold (integer division)
lambdas = np.logspace(-3, 5, 10)
val_mses = np.zeros((k, len(lambdas)))
for fold in range(k):
    # split the training dataset into two parts: one with only the points in fold "fold"
    # and one with all the other datapoints
    ## YOUR CODE HERE ##
    x_trn = ## YOUR CODE HERE ##
    y_trn = ## YOUR CODE HERE ##
    x_val = ## YOUR CODE HERE ##
    y_val = ## YOUR CODE HERE ##
    for ii in range(len(lambdas)):
        # fit model using x_trn & predict y_val
        y_val_hat = ## YOUR CODE HERE ##
        # store squared error in val_mses
        val_mses[fold,ii] = ## YOUR CODE HERE ##
# Plot the MSE for each lambda, averaged across the folds
## YOUR CODE HERE ##
# Choose the best lambda, print it
best_lambda = ## YOUR CODE HERE ##
print("Best lambda:", best_lambda)
# Fit a model using the whole training set and the best lambda
beta_hat = ## YOUR CODE HERE ##
# Use that model to predict y_test
y_test_hat = ## YOUR CODE HERE ##
# Compute MSE on the test dataset, print it
test_mse = ## YOUR CODE HERE ##
print("Test MSE:", test_mse)
|
homeworks/homework_2/homework_2.ipynb
|
alexhuth/n4cs-fa2017
|
gpl-3.0
|
(c) Monte Carlo cross-validation (4 pts)
One issue with $k$-fold CV is that the size of the validation set depends on the number of folds. If you want really stable estimates for your hyperparameter, you want to have a pretty large validation set, but also do a lot of folds. You can accomplish this by, on each iteration, randomly assigning some fraction of the training set to be the validation set.
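A random split like the one described above can be sketched as follows (toy sizes `n = 10`, `n_val = 3`, and a fixed seed for reproducibility — not the homework's values):

```python
import numpy as np

rng = np.random.RandomState(0)   # seeded so the split is reproducible
n, n_val = 10, 3
x = np.arange(n)

val_idx = rng.permutation(n)[:n_val]  # random sample without replacement
x_val = x[val_idx]                    # random validation set
x_trn = np.delete(x, val_idx)         # everything else trains the model
```

Each Monte Carlo iteration draws a fresh `val_idx`, so the same point can appear in the validation set on several iterations, unlike $k$-fold.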
|
n_mc_iters = 50 # let's do 50 Monte Carlo iterations
n_per_mc_iter = 50 # on each MC iteration, hold out 50 datapoints to be the validation set
lambdas = np.logspace(-3, 5, 10)
val_mses = np.zeros((n_mc_iters, len(lambdas)))
for it in range(n_mc_iters):
    # split the training dataset into two parts: one with a random selection of n_per_mc_iter points
    # and one with all the other datapoints
    ## YOUR CODE HERE ##
    x_trn = ## YOUR CODE HERE ##
    y_trn = ## YOUR CODE HERE ##
    x_val = ## YOUR CODE HERE ##
    y_val = ## YOUR CODE HERE ##
    for ii in range(len(lambdas)):
        # fit model using x_trn & predict y_val
        y_val_hat = ## YOUR CODE HERE ##
        # store squared error in val_mses
        val_mses[it,ii] = ## YOUR CODE HERE ##
# Plot the MSE for each lambda, averaged across the MC iterations
## YOUR CODE HERE ##
# Choose the best lambda, print it
best_lambda = ## YOUR CODE HERE ##
print("Best lambda:", best_lambda)
# Fit a model using the whole training set and the best lambda
beta_hat = ## YOUR CODE HERE ##
# Use that model to predict y_test
y_test_hat = ## YOUR CODE HERE ##
# Compute the MSE, print it
test_mse = ## YOUR CODE HERE ##
print("Test MSE:", test_mse)
|
homeworks/homework_2/homework_2.ipynb
|
alexhuth/n4cs-fa2017
|
gpl-3.0
|
Problem 2 - Multiple feature spaces (20 pts)
Suppose you've done an experiment and measured responses to a bunch of stimuli. You've got three different hypotheses about how the stimuli might be represented in the responses. You instantiate these hypotheses as three different linearizing transforms, giving you three different sets of features that you can extract: $X_1$, $X_2$, and $X_3$ (in variables called x1, x2, and x3).
Feature space $X_1$ has 12 features, $X_2$ has 50 features, and $X_3$ has 100 features.
Note that you've recorded $m=35$ different responses (i.e. $Y$ is an $n\times m$ matrix). Think of these as $m$ different neurons or $m$ different voxels.
|
from homework_2_utils import make_data
num_training = 500 # total number of datapoints in training set
num_test = 100 # total number of datapoints in test set
num_features = [12, 50, 100] # number of features in each feature space
num_responses = 35 # number of responses (voxels or neurons)
# This is just a bunch of constants. Don't worry about what they mean for now
combs = [[0, 3, 4, 6],
[1, 3, 5, 6],
[2, 4, 5, 6]]
true_variances = np.array([300, 0, 1500, 250, 250, 4000, 500]).astype(float)
total_variance = 0.3
true_variances = true_variances / true_variances.sum() * total_variance
noise_variance = 1 - true_variances.sum()
P_parts = [3] * 7
Pnoise_models = [P - np.array(P_parts)[c].sum() for P,c in zip(num_features, combs)]
# Generate the data!
[x1_total, x2_total, x3_total], Y_total = make_data(num_training, num_test, P_parts,
num_responses, true_variances,
noise_variance, combs, Pnoise_models,
num_features)
x1 = x1_total.T[:num_training]
x2 = x2_total.T[:num_training]
x3 = x3_total.T[:num_training]
Y = Y_total[:num_training]
x1_test = x1_total.T[num_training:]
x2_test = x2_total.T[num_training:]
x3_test = x3_total.T[num_training:]
Y_test = Y_total[num_training:]
print('x1 shape:', x1.shape)
print('x2 shape:', x2.shape)
print('x3 shape:', x3.shape)
print('Y shape:', Y.shape)
print('x1_test shape:', x1_test.shape)
print('x2_test shape:', x2_test.shape)
print('x3_test shape:', x3_test.shape)
print('Y_test shape:', Y_test.shape)
|
homeworks/homework_2/homework_2.ipynb
|
alexhuth/n4cs-fa2017
|
gpl-3.0
|
(a) - Deciding which is best (8 pts)
Construct separate linear models using each of the three different feature spaces. Use those models to predict responses in the test set. Compute the prediction performance of each model. Which model is the best overall? (Don't worry about statistical comparison, just choose the one with the lowest average MSE across the responses.)
You should fit a model for each feature space using ridge regression. Use cross-validation (pick your flavor) to select the best ridge parameter separately for each. It's fine here to use the same ridge parameter for each of the $m$ responses.
|
lambdas = np.logspace(1, 7, 15) # use these lambdas
## YOUR CODE HERE ##
# Plot mean validation set MSE as a function of lambda for each feature space
## YOUR CODE HERE ##
# Find best lambda for each feature space
best_lambda_1 = ## YOUR CODE HERE ##
best_lambda_2 = ## YOUR CODE HERE ##
best_lambda_3 = ## YOUR CODE HERE ##
print(best_lambda_1, best_lambda_2, best_lambda_3)
# Fit models with best lambdas and entire training set
beta_1 = ## YOUR CODE HERE ##
beta_2 = ## YOUR CODE HERE ##
beta_3 = ## YOUR CODE HERE ##
# Predict Y_test with each, compute MSE
mse = lambda a, b: ((a - b)**2).mean()
test_mse_1 = ## YOUR CODE HERE ##
test_mse_2 = ## YOUR CODE HERE ##
test_mse_3 = ## YOUR CODE HERE ##
print(test_mse_1, test_mse_2, test_mse_3)
|
homeworks/homework_2/homework_2.ipynb
|
alexhuth/n4cs-fa2017
|
gpl-3.0
|
(b) - Variance partitioning (12 pts)
Each of these feature spaces explains something about the response. Does the variance explained by each feature space overlap with the others? Do variance partitioning to find out!
There should be 7 variance partitions: one for the unique contribution of each feature space, one for the variance explained by each pair of feature spaces, and one for the variance explained by all three spaces.
To get the sizes of these partitions, you'll need to fit models with each combination of feature spaces (each one alone, each pair, and all three together), then compute $R^2$ for each model, then use some simple algebra to compute the size of each partition.
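To see how that algebra works in the simplest case, here is a two-feature-space sketch with made-up $R^2$ values (0.30, 0.25, 0.40 are hypothetical); the three-space version in this problem works the same way, just with seven partitions instead of three.

```python
import numpy as np

# Hypothetical R^2 for X1 alone, X2 alone, and X1 & X2 together
r1, r2, r12 = 0.30, 0.25, 0.40

unique_1 = r12 - r2      # variance only X1 explains
unique_2 = r12 - r1      # variance only X2 explains
shared = r1 + r2 - r12   # variance both explain (inclusion-exclusion)

# The partitions tile the variance of the joint model
assert np.isclose(unique_1 + unique_2 + shared, r12)
```

Each partition size is a signed sum of the model $R^2$ values, which is exactly why a single matrix `V` with entries in $\{-1, 0, 1\}$ can map `all_rsqs` to `partition_sizes`.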
|
# Here's a function that computes R^2 of a_hat predicting a
Rsq = lambda a, a_hat: 1 - (a - a_hat).var() / a.var()
# Here it's probably useful to define a cv_ridge function that you can call a bunch of times
def cv_ridge(x, y, x_test, y_test):
    lambdas = np.logspace(1, 7, 15) # use these lambdas
    ## YOUR CODE HERE ##
    return Rsq(y_test, y_test_hat)
# Now fit a model with each combination of feature spaces, and compute R^2!
# store the 7 R^2 values in all_rsqs
# use this order for the 7 models:
# 1. Just feature space 1
# 2. Just 2
# 3. Just 3
# 4. 1 & 2
# 5. 1 & 3
# 6. 2 & 3
# 7. All three (1 & 2 & 3)
all_rsqs = np.zeros(7)
## YOUR CODE HERE ##
# Now let's just look at the R^2 for each of the 7 models!
print(all_rsqs)
# Next, use the algebraic formulae to figure out the size of each partition
# A nice way to do this is to define a matrix V such that V.dot(all_rsqs) = partition_sizes
## YOUR CODE HERE ##
# partition_sizes should have the size of each partition (in the same units as R^2)
partition_sizes = ## YOUR CODE HERE ##
# let's compare your partition_sizes to true_variances, which is what they should be!
list(zip(partition_sizes, true_variances))
|
homeworks/homework_2/homework_2.ipynb
|
alexhuth/n4cs-fa2017
|
gpl-3.0
|
Black-box model 02
The idea is to estimate the model parameters from the experimental measurement of the indoor temperature ($T$) and from the weather data. The coefficients obtained do not necessarily have a physically meaningful value, but this makes it possible to estimate and compare the day-to-day variations, and thus in principle to see the effect of the occupant's behavior (opening windows and shutters in summer).
The 'black-box' model
We want the simplest possible model, with only a few coefficients: a thermal resistance $h$ to the outside, a thermal mass $M$, and an external heat flux $\eta\Phi(t)$.
Equivalent electrical circuit:
and the corresponding differential equation:
$$
\frac{dT}{dt} = \frac{h}{M} \,\left[ T_{ext}(t) - T \right] + \frac{\eta}{M} \, \Phi(t)
$$
$T(t)$ is the temperature inside the apartment.
$T_{ext}(t)$ is the outdoor temperature given by the weather data.
$\Phi(t)$ is the solar flux (in watts) on the glazed surfaces.
There are two unknown parameters, both normalized by the thermal mass $M$:
* $\eta$, which corresponds to the absorption of solar radiation, normally between 0 and 1.
* $h$, which corresponds to the insulation from the outside air (in W/K).
$M$, finally, is the thermal mass of the apartment (in J/K). It determines the characteristic response time of the system, and is for that reason hard to estimate, because it is not uniquely identifiable.
Loading the data
Data obtained with the notebook get_data_and_preprocess.ipynb.
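With $\Phi = 0$ and a constant $T_{ext}$, the homogeneous part of the equation makes this characteristic response time explicit:

$$
\frac{dT}{dt} = -\frac{h}{M}\left[ T - T_{ext} \right]
\quad\Longrightarrow\quad
T(t) - T_{ext} = \left[ T(0) - T_{ext} \right] e^{-t/\tau},
\qquad \tau = \frac{M}{h}
$$

so only the ratios $h/M$ and $\eta/M$ are identifiable from the temperature signal, not $h$, $\eta$ and $M$ separately.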
|
df_full = pd.read_pickle( 'weatherdata.pck' )
df = df_full[['T_int', 'temperature', 'flux_tot', 'windSpeed']].copy()
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
We have the recording of the indoor temperature ($T_{int}$) and the outdoor temperature ('temperature'):
|
df[['T_int', 'temperature']].plot( figsize=(14, 4) ); plt.ylabel('°C');
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
And the solar flux, computed for my apartment and projected according to the area and orientation of my windows (Velux):
|
# Solar flux on the windows:
df[['flux_tot']].plot( figsize=(14, 4) ); plt.ylabel('Watt');
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
ODE solver
The equation is integrated in time with odeint from scipy (doc, OdePack).
|
from scipy.integrate import odeint
def get_dTdt( T, t, params, get_Text, get_Phi ):
    """ Time derivative of the temperature
        params : [ h/M , eta/M ]
        get_Text, get_Phi: interpolation functions
    """
    T_ext = get_Text( t )
    phi = get_Phi( t )
    dTdt = params[0] * ( T_ext - T ) + params[1] / 100 * phi
    return 1e-6*dTdt

def apply_model( data, T_start, params, full_output=False ):
    data_dict = data.to_dict(orient='list')
    time_sec = data.index.astype(np.int64) // 1e9 # conversion to seconds
    # build the interpolation functions:
    get_Text = lambda t: np.interp( t, time_sec, data_dict['temperature'] )
    get_Phi = lambda t: np.interp( t, time_sec, data_dict['flux_tot'] )
    T_theo = odeint(get_dTdt, T_start, time_sec, args=(params, get_Text, get_Phi ), \
                    full_output=full_output, h0=30*60)
    # h0: initial time step used by the solver
    return T_theo.flatten()
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
Note: the factors 100 and 1e-6 keep the parameter values close to unity, with the same order of magnitude for $\Phi$ and $\Delta T$... which helps the optimization.
Test on the full dataset:
|
params = ( 3, 3 )
res = apply_model( df, 30, params )
plt.figure( figsize=(14, 4) )
plt.plot( res )
plt.plot( df['T_int'].as_matrix() ) ;
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
Day-by-day estimation
The parameters $\eta$ and $h$ are not actually constant. They depend on how the apartment is used, mainly on whether the windows are open and on the position of the shutters over them. They are therefore functions of the time of day and of the weather. The idea is to estimate their values day by day.
But:
* at night, $\Phi = 0$, so $\eta$ is undetermined
* during the day, $T_{ext}(t)$ is strongly correlated with $\Phi(t)$; decoupling the two parameters is then not straightforward.
The estimates are therefore done separately: during the day for $\eta$ (with a residual $h_{min}$), and at night for $h$ (corresponding to ventilation).
|
def get_errorfit( params, data, T_start ):
    """ Run the model on the data with the given parameters,
        then compute the error against the experimental (non-NaN) data
    """
    T_exp = data['T_int'].as_matrix()
    T_theo = apply_model( data, T_start, params )
    delta = (T_exp - T_theo)**2
    return np.sum( delta[ ~np.isnan( delta ) ] )

""" The fit is done differently for night and day,
    so that only one parameter is fitted at a time
"""
from scipy.optimize import fminbound
from scipy.optimize import minimize

def fit_model_p1( data, T_start, param_0 ):
    func0 = lambda x: get_errorfit( (param_0, x), data, T_start )
    x1, x2 = (.1, 100)
    param_1 = fminbound(func0, x1, x2, disp=0)
    #param_1 = fmin(func0, 20)
    return param_1

def fit_model_p0( data, T_start, param_1 ):
    func1 = lambda x: get_errorfit( (x, param_1), data, T_start )
    x1, x2 = (.1, 100)
    param_0 = fminbound(func1, x1, x2, disp=0)
    #param_0 = fmin(func1, 20)
    return param_0

def fit_model_p01( data, T_start ):
    func01 = lambda x: get_errorfit( x, data, T_start )
    x12 = (2.3, 3.)
    res = minimize(func01, x12, method='Powell')
    #param_0 = fmin(func1, 20)
    return res.x
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
Splitting into day / night periods
|
""" Estimate the day and night periods from the solar flux
"""
df['isnight'] = ( df['flux_tot'] == 0 ).astype(int)
# Number the periods:
nights_days = df['isnight'].diff().abs().cumsum()
nights_days[0] = 0
df_byday = df.groupby(nights_days)
Groupes = [ int( k ) for k in df_byday.groups.keys() ]
df_byday['temperature'].plot( figsize=(14, 3) );
df_byday['T_int'].plot( figsize=(14, 3) );
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
Fitting period by period
|
def fit_a_day( data, T_zero ):
    """ Fit the model on the data 'data'
        with the initial temperature 'T_zero'
    """
    # Handle missing experimental data:
    T_exp = data['T_int'].as_matrix()
    nombre_nonNaN = T_exp.size - np.isnan( T_exp ).sum()
    if nombre_nonNaN < 10:
        # not enough data to do the fit
        h, eta = np.nan, np.nan
        res = np.full( T_exp.shape , np.nan)
    else:
        eta_night = 0
        h_day = 2.3 # h_min, minimal value? ... arbitrary for now
        if data['isnight'].all():
            # night
            h = fit_model_p0( data, T_zero, eta_night )
            eta = eta_night
        else:
            # day
            h = h_day
            #eta = fit_model_p1( data, T_zero, h_day )
            h, eta = fit_model_p01( data, T_zero )
        # Run the model with the fitted parameters:
        res = apply_model( data, T_zero, (h, eta) )
    return (h, eta), res
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
Plot for one period (debug)
|
len( Groupes )
data = df_byday.get_group( Groupes[1] )
T_int = data['T_int'].interpolate().as_matrix()
T_zero = T_int[ ~ np.isnan( T_int ) ][0]
params, res = fit_a_day( data, T_zero )
print( data.index[0] )
print( params )
plt.figure( figsize=(14, 5) )
plt.plot(data.index, res, '--', label='T_theo' )
plt.plot(data.index, T_int, label='T_int' );
plt.plot(data.index, data['temperature'].as_matrix() , label='T_ext' );
plt.plot(data.index, data['flux_tot'].as_matrix()/100 + 20, label='~ Flux' );
plt.legend();
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
Computation over all periods:
takes some time
|
# init
df['T_theo'] = 0
df['eta_M'], df['h_M'] = 0, 0
# initial value
T_zero = df['T_int'][ df['T_int'].first_valid_index() ]
for grp_id in Groupes:
    print( '%i, ' % grp_id, end='' )
    data_day = df_byday.get_group( grp_id )
    # handle the case with no experimental data at all:
    if np.isnan( T_zero ):
        T_int = data_day['T_int']
        if np.isnan( T_int ).all():
            T_zero = 0
        else:
            T_zero = data_day['T_int'][ data_day['T_int'].first_valid_index() ]
    # estimation
    params, res = fit_a_day( data_day, T_zero )
    # save
    df.loc[ data_day.index, 'T_theo'] = res
    df.loc[ data_day.index, 'eta_M'] = params[1]
    df.loc[ data_day.index, 'h_M'] = params[0]
    # initial value for the next period
    T_zero = res[-1]
print('done')
df[['T_int', 'T_theo']].plot( figsize=(14, 5) );
df[['T_int', 'temperature', 'T_theo']].plot( figsize=(14, 5) );
df[['flux_tot']].plot( figsize=(14, 5) );
plt.figure( figsize=(14, 5) )
plt.subplot( 2, 1, 1 )
plt.plot( df[['h_M']] ); plt.ylabel('h_M');
plt.subplot( 2, 1, 2 )
plt.plot( df[['eta_M']], 'r' ); plt.ylabel('eta_M');
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
Orders of magnitude
eta: it is a percentage, between 0 and 100
h: there are two contributions:
- h_min: the overall insulation of the building (walls, roof and above all windows). This value should be constant.
- h_aero: air infiltration and ventilation.
M: the thermal mass
M ~ 0.1e6 J/K ???
h_min corresponds to the maximal insulation of the apartment, with all windows closed
h_max corresponds to maximal ventilation, with all windows open (+ wind?)
eta_min corresponds to all shutters closed
eta_max to no shutters at all
Estimation of h_min
h_min = aire_vitre * U_vitrage + perimetre * U_cadres + aire_parois * h_parois
|
U_vitrage = 2.8 # W/m2/K, typical for double glazing
U_cadres = 0.15 + 0.016 # W/m/K, for a wooden frame of square cross-section; this is actually the conductivity of wood
# + psi ...
aire_vitre = 0.6*0.8*2 + 1.2*0.8 + 0.3*0.72*4 + 0.25**2 # m2
perimetre = (0.6+0.8)*4 + (1.2+0.8)*2 + 2*(0.3+0.72)*4 + 4*0.25
aire_parois = 4.59*7.94*2 # m2
h_parois = 0.04 / 0.15 # W/m2/K - for rock wool
h_min = aire_vitre * U_vitrage + perimetre * U_cadres + aire_parois * h_parois
print('h_min : %f W/K' % h_min)
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
Residuals
|
R = df['T_int'] - df['T_theo']
R.plot( figsize=(14, 5), style='k' ); plt.ylabel('°C');

# Plot the relative temperature variation
plt.figure( );
for grp_id in Groupes:
    data = df_byday.get_group( grp_id )
    if np.isnan( data['T_int'] ).all():
        continue
    T_int = data['T_int'].as_matrix()
    Tmin, Tmax = T_int[ ~ np.isnan( T_int ) ].min(), T_int[ ~ np.isnan( T_int ) ].max()
    if data['isnight'].all():
        T_int = T_int - Tmax
    else:
        T_int = T_int - Tmin
    plt.plot( T_int )

# Correlation T_ext <-> Phi
from scipy.stats import pearsonr
norm = lambda X: (X - X.min())/(X.max() - X.min())
coeffs_cor, groupes_id = [], []
plt.figure( );
for grp_id in Groupes[2:]:
    data = df_byday.get_group( grp_id )
    if data['isnight'].all():
        continue
    T_ext = data['temperature'].as_matrix()
    phi = data['flux_tot'].as_matrix()
    #T_ext, phi = norm(T_ext), norm(phi)
    plt.plot( T_ext, phi, '.' )
    coeffs_cor.append( pearsonr(T_ext, phi)[0] )
    groupes_id.append( grp_id )
#plt.axis('equal')
plt.plot( coeffs_cor ) ;
sorted( zip( groupes_id, coeffs_cor ), key=lambda x:x[1] )
data = df_byday.get_group( Groupes[13] )
T_int = data['T_int'].as_matrix()
T_ext = data['temperature'].as_matrix()
phi = data['flux_tot'].as_matrix()
plt.plot( norm(phi) )
plt.plot( norm(T_ext) );
|
BlackBoxModel02.ipynb
|
xdze2/thermique_appart
|
mit
|
3. Enter DV360 User Audit Recipe Parameters
DV360 only permits SERVICE accounts to access the user list API endpoint, so be sure to provide one and grant it permission.
Wait for BigQuery->->->DV_... to be created.
Wait for BigQuery->->->Barnacle_... to be created, then copy and connect the following data sources.
Join the StarThinker Assets Group to access the following assets.
Copy Barnacle DV Report.
Click Edit->Resource->Manage added data sources, then edit each connection to connect to your new tables above.
Or give these instructions to the client.
Modify the values below for your use case; this can be done multiple times. Then click play.
|
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'auth_write':'service', # Credentials used for writing data.
'partner':'', # Partner ID to run user audit on.
'recipe_slug':'', # Name of Google BigQuery dataset to create.
}
print("Parameters Set To: %s" % FIELDS)
|
colabs/barnacle_dv360.ipynb
|
google/starthinker
|
apache-2.0
|
4. Execute DV360 User Audit
This does NOT need to be modified unless you are changing the recipe, click play.
|
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dataset':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}
}
},
{
'google_api':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for writing data.'}},
'api':'doubleclickbidmanager',
'version':'v1.1',
'function':'queries.listqueries',
'alias':'list',
'results':{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'DV_Reports'
}
}
}
},
{
'google_api':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for writing data.'}},
'api':'displayvideo',
'version':'v1',
'function':'partners.list',
'kwargs':{
'fields':'partners.displayName,partners.partnerId,nextPageToken'
},
'results':{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'DV_Partners'
}
}
}
},
{
'google_api':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for writing data.'}},
'api':'displayvideo',
'version':'v1',
'function':'advertisers.list',
'kwargs':{
'partnerId':{'field':{'name':'partner','kind':'integer','order':2,'default':'','description':'Partner ID to run user audit on.'}},
'fields':'advertisers.displayName,advertisers.advertiserId,nextPageToken'
},
'results':{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'DV_Advertisers'
}
}
}
},
{
'google_api':{
'auth':'service',
'api':'displayvideo',
'version':'v1',
'function':'users.list',
'kwargs':{
},
'results':{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'DV_Users'
}
}
}
},
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'from':{
'query':"SELECT U.userId, U.name, U.email, U.displayName, REGEXP_EXTRACT(U.email, r'@(.+)') AS Domain, IF (ENDS_WITH(U.email, '.gserviceaccount.com'), 'Service', 'User') AS Authentication, IF((Select COUNT(advertiserId) from UNNEST(U.assignedUserRoles)) = 0, 'Partner', 'Advertiser') AS Scope, STRUCT( AUR.partnerId, P.displayName AS partnerName, AUR.userRole, AUR.advertiserId, A.displayName AS advertiserName, AUR.assignedUserRoleId ) AS assignedUserRoles, FROM `{dataset}.DV_Users` AS U, UNNEST(assignedUserRoles) AS AUR LEFT JOIN `{dataset}.DV_Partners` AS P ON AUR.partnerId=P.partnerId LEFT JOIN `{dataset}.DV_Advertisers` AS A ON AUR.advertiserId=A.advertiserId ",
'parameters':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}
},
'legacy':False
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'view':'Barnacle_User_Roles'
}
}
},
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'from':{
'query':"SELECT R.*, P.displayName AS partnerName, A.displayName AS advertiserName, FROM ( SELECT queryId, (SELECT CAST(value AS INT64) FROM UNNEST(R.params.filters) WHERE type = 'FILTER_PARTNER' LIMIT 1) AS partnerId, (SELECT CAST(value AS INT64) FROM UNNEST(R.params.filters) WHERE type = 'FILTER_ADVERTISER' LIMIT 1) AS advertiserId, R.schedule.frequency, R.params.metrics, R.params.type, R.metadata.dataRange, R.metadata.sendNotification, DATE(TIMESTAMP_MILLIS(R.metadata.latestReportRunTimeMS)) AS latestReportRunTime, FROM `{dataset}.DV_Reports` AS R) AS R LEFT JOIN `{dataset}.DV_Partners` AS P ON R.partnerId=P.partnerId LEFT JOIN `{dataset}.DV_Advertisers` AS A ON R.advertiserId=A.advertiserId ",
'parameters':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}
},
'legacy':False
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'view':'Barnacle_Reports'
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
|
colabs/barnacle_dv360.ipynb
|
google/starthinker
|
apache-2.0
|
We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss; there is a convenient TensorFlow function for this, tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
|
# mnist.train.images[0]
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_shape = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32,shape=(None,image_shape),name='inputs')
targets_ = tf.placeholder(tf.float32,shape=(None,image_shape),name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_,encoding_dim,activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded,image_shape) # linear activation
# Sigmoid output from logits
decoded = tf.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
|
autoencoder/Simple_Autoencoder.ipynb
|
zhuanxuhit/deep-learning
|
mit
|
Define an op to dequeue a line from file:
|
reader = tf.TextLineReader()
key_op, value_op = reader.read(filename_queue)
|
ch02_basics/Concept10_queue_text.ipynb
|
BinRoot/TensorFlow-Book
|
mit
|
Start all queue runners collected in the graph:
|
sess = tf.InteractiveSession()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
|
ch02_basics/Concept10_queue_text.ipynb
|
BinRoot/TensorFlow-Book
|
mit
|
Try reading lines from the file by dequeuing:
|
for i in range(100):
key, value = sess.run([key_op, value_op])
print(key, value)
|
ch02_basics/Concept10_queue_text.ipynb
|
BinRoot/TensorFlow-Book
|
mit
|
Unlike the previous chapters, here we disable the sandbox data interface and use a live data source:
|
abupy.env.disable_example_env_ipython()
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
1. Switching the data fetch mode
Use g_data_fetch_mode to inspect the current data fetch mode:
|
abupy.env.g_data_fetch_mode
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
The default mode is E_DATA_FETCH_NORMAL. NORMAL means the cache is tried first, and the network is only queried when the data is not cached. There are some further optimizations as well: for example, even if the cached data cannot satisfy the request, the cache index may record that a network fetch was already attempted today, in which case the network is not queried again.
For more details, read the ABuDataSource source code.
E_DATA_FETCH_FORCE_NET forces a data update over the network. It is generally not recommended; use it when you have switched data sources or when the cached data is corrupt:
|
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_NET
ABuSymbolPd.make_kl_df('usBIDU').tail()
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
E_DATA_FETCH_FORCE_LOCAL forces fetching from the cache. This is the mode normally used for backtesting, because measuring a strategy usually means modifying it and re-running the backtest repeatedly. Forcing the cache has these benefits:
The dataset never changes, so metrics stay comparable between runs
Backtests run faster, especially whole-market backtests
Data fetching and backtesting are separated, which simplifies troubleshooting
|
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
ABuSymbolPd.make_kl_df('usBIDU').tail(1)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Now restore the data fetch mode to the default E_DATA_FETCH_NORMAL:
|
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_NORMAL
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
2. Switching the data storage
The default cache storage mode is CSV, as shown below:
|
abupy.env.g_data_cache_type
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
The cached csv files are stored under ~/abu/data/csv/. Run the command below to open the directory directly (or print the path):
|
if abupy.env.g_is_mac_os:
!open $abupy.env.g_project_kl_df_data_csv
else:
!echo $abupy.env.g_project_kl_df_data_csv
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
You can switch to other storage modes via g_data_cache_type; the following switches to HDF5 storage:
|
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_HDF5
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
The cached hdf5 file lives at ~/abu/data/df_kl.h5. Run the command below to open the directory directly, or to print the full path:
|
if abupy.env.g_is_mac_os:
!open $abupy.env.g_project_data_dir
else:
!echo $abupy.env.g_project_data_dir
ABuSymbolPd.make_kl_df('usTSLA').tail()
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Now switch back to the default csv mode. The advantages of csv are its small storage footprint, support for parallel reads and writes, and good cross-platform compatibility:
|
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
3. Switching the data source
As shown below, the current data source g_market_source is the Baidu data source:
|
abupy.env.g_market_source
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Suppose we want bitcoin data; the output shows that the Baidu data source does not support bitcoin:
|
ABuSymbolPd.make_kl_df('btc')
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Switch the data source to the Huobi data source and the data can be fetched normally, as follows:
|
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_hb_tc
ABuSymbolPd.make_kl_df('btc').tail()
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Similarly, suppose we want futures data for egg contracts; the output shows that the Huobi data source does not support the futures market:
|
ABuSymbolPd.make_kl_df('jd0')
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Switch the data source to the Sina futures data source and the data can be fetched normally, as follows:
|
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_sn_futures
ABuSymbolPd.make_kl_df('jd0').tail()
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
4. Updating whole-market data
Previous chapters ran backtests with abu.run_loop_back on sandbox data. To use live data, especially for trade-heavy backtests such as whole-market tests, it is recommended to update the data with abu.run_kl_update before calling abu.run_loop_back.
run_kl_update first forces a whole-market data update over the network, then writes all of the market's trading data to the cache. When abu.run_loop_back later runs the backtest in local-data mode, data updating and strategy backtesting are separated and efficiency improves.
The code below fetches six years of trading data for US stocks, A-shares, Hong Kong stocks, futures, bitcoin and litecoin. Later chapters use these datasets in backtest examples; you can fetch only the markets you care about and need not run all of them.
All of the fetched data is also hosted on Baidu Cloud. Later chapters use the data updated in this section, so it is recommended to download the pre-built database from the cloud drive instead of downloading from each data source one by one. The Baidu Cloud links:
csv format: US stocks, A-shares, Hong Kong stocks, coins, futures, six years of daily k-data, password: gvtr
The data below is stored as hdf5. Since the uncompressed hdf5 files are very large and are split by python version, use the csv cache files if you are short on storage,
especially under python2:
macOS python3: US stocks, A-shares, Hong Kong stocks, coins, futures, six years of daily k-data, password: ecyp
macOS python2: A-shares, six years of daily k-data, password: sid8
macOS python2: US stocks, six years of daily k-data, password: uaww
windows python3: US stocks, A-shares, Hong Kong stocks, coins, futures, six years of daily k-data, password: 3cwe
windows python2: A-shares, six years of daily k-data, password: 78mb
windows python2: US stocks, six years of daily k-data, password: 63r3
After downloading, unpack the hdf5 archive to get the df_kl.h5 file, or the csv archive to get the csv folder, and place it under the path below
|
if abupy.env.g_is_mac_os:
!open $abupy.env.g_project_data_dir
else:
!echo $abupy.env.g_project_data_dir
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
If you would rather not download the data files directly, you can also switch to the Tencent data source and run a whole-market update of the US stock data:
Note: this is a slow operation, roughly 15 minutes; run it while doing something else
|
%%time
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_tx
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
abu.run_kl_update(start='2011-08-08', end='2017-08-08', market=EMarketTargetType.E_MARKET_TARGET_US, n_jobs=10)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
If you would rather not download the data files directly, you can also switch to the Baidu data source and run a whole-market update of the A-share data:
Note: this is a slow operation, roughly 20 minutes; run it while doing something else
|
%%time
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_bd
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
abu.run_kl_update(start='2011-08-08', end='2017-08-08', market=EMarketTargetType.E_MARKET_TARGET_CN, n_jobs=10)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
If you would rather not download the data files directly, you can also switch to the NetEase data source and run a whole-market update of the Hong Kong stock data:
Note: this is a slow operation, roughly 5 minutes; run it while doing something else
|
%%time
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_nt
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
abu.run_kl_update(start='2011-08-08', end='2017-08-08', market=EMarketTargetType.E_MARKET_TARGET_HK, n_jobs=10)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Switch to the Sina futures data source and run a whole-market update of the futures data:
Note: this is a fast operation, roughly 30 seconds
|
%%time
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_sn_futures
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
abu.run_kl_update(start='2011-08-08', end='2017-08-08', market=EMarketTargetType.E_MARKET_TARGET_FUTURES_CN, n_jobs=4)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Switch to the Huobi data source and run a whole-market update of the bitcoin and litecoin data:
Note: this is a fast operation, roughly 5 seconds
|
%%time
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_hb_tc
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
abu.run_kl_update(start='2011-08-08', end='2017-08-08', market=EMarketTargetType.E_MARKET_TARGET_TC, n_jobs=2)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
5. Plugging in an external stock data source
Data sources built into abupy:
US market: Tencent, Baidu, NetEase, Sina
Hong Kong market: Tencent, Baidu, NetEase
A-share market: Tencent, Baidu, NetEase
Futures market: Sina futures, Sina international futures
Bitcoin, litecoin: Huobi
These data sources are provided for learning purposes only; uninterrupted availability is not guaranteed. If data quality matters to you (some sources have incorrect forward-adjusted prices, others have inaccurate volumes), you need to plug in your own data source.
The example below plugs in a stock-type data source. First implement a parser class for the data the source returns, as follows:
|
@AbuDataParseWrap()
class SNUSParser(object):
    """snus data-source parser class, decorated by the class decorator AbuDataParseWrap"""
    def __init__(self, symbol, json_dict):
        """
        :param symbol: the requested symbol, a str object
        :param json_dict: the json data returned by the request
        """
        data = json_dict
        # prepare the attribute sequences required by AbuDataParseWrap
        if len(data) > 0:
            # date sequence
            self.date = [item['d'] for item in data]
            # open price sequence
            self.open = [item['o'] for item in data]
            # close price sequence
            self.close = [item['c'] for item in data]
            # high price sequence
            self.high = [item['h'] for item in data]
            # low price sequence
            self.low = [item['l'] for item in data]
            # volume sequence
            self.volume = [item['v'] for item in data]
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
The SNUSParser written above is a data-source parser class:
A parser class must be decorated with the class decorator AbuDataParseWrap
Its __init__ splits the raw json_dict into self.date, self.open, self.close, self.high, self.low and self.volume
In this example each json record has the format:
{'d': '2017-08-08', 'o': '102.29', 'h': '102.35', 'l': '99.16', 'c': '100.07', 'v': '1834706'}
The purpose of __init__ is simply to unpack the raw network data into the basic sequences above; the class decorator AbuDataParseWrap then post-processes and standardizes the data
For more details read the ABuDataParser source code
Note: the unpacking here is written for clarity, not efficiency
After the parser class you need to write a data-source class. The example below uses the Sina US-stock data source:
|
class SNUSApi(StockBaseMarket, SupportMixin):
    """snus data source, supports US stocks"""
    K_NET_BASE = "http://stock.finance.sina.com.cn/usstock/api/json_v2.php/US_MinKService.getDailyK?" \
                 "symbol=%s&___qn=3n"

    def __init__(self, symbol):
        """
        :param symbol: a Symbol object
        """
        super(SNUSApi, self).__init__(symbol)
        # set the data-source parser class
        self.data_parser_cls = SNUSParser

    def _support_market(self):
        """declare that this data source supports US stocks"""
        return [EMarketTargetType.E_MARKET_TARGET_US]

    def kline(self, n_folds=2, start=None, end=None):
        """daily k-line interface"""
        url = SNUSApi.K_NET_BASE % self._symbol.symbol_code
        data = ABuNetWork.get(url=url, timeout=(10, 60)).json()
        kl_df = self.data_parser_cls(self._symbol, data).df
        if kl_df is None:
            return None
        return StockBaseMarket._fix_kline_pd(kl_df, n_folds, start, end)

    def minute(self, n_fold=5, *args, **kwargs):
        """minute k-line interface"""
        raise NotImplementedError('SNUSApi minute NotImplementedError!')
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
The SNUSApi written above is a stock data-source class:
A stock-type data-source class must inherit from StockBaseMarket
__init__ assigns the parser class
The class must mix in SupportMixin and implement _support_market to declare the supported markets; this example supports only the US market
The class must implement the kline interface, which fetches the daily data for a symbol and hands it to the parser class
The class must implement the minute k-line interface, or simply raise NotImplementedError
Example usage below. check_support tests whether A-shares are supported; the result is False:
|
SNUSApi(code_to_symbol('sh601766')).check_support(rs=False)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
check_support tests whether US stocks are supported; the result is True:
|
SNUSApi(code_to_symbol('usSINA')).check_support(rs=False)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Fetch data through the kline interface, as follows:
|
SNUSApi(code_to_symbol('usSINA')).kline().tail()
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
To plug SNUSApi into abupy, simply assign the data-source class to abupy.env.g_private_data_source:
|
abupy.env.g_private_data_source = SNUSApi
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Fetching sh601766 through the make_kl_df interface shows that SNUSApi does not support it:
|
ABuSymbolPd.make_kl_df('sh601766')
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Fetching US-stock data through the make_kl_df interface returns data produced by the SNUSApi implemented above:
|
ABuSymbolPd.make_kl_df('usSINA').tail()
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
6. Plugging in an external futures data source
The example below plugs in a futures-type data source. First implement a parser class for the data the source returns, as follows:
|
@AbuDataParseWrap()
class SNFuturesParser(object):
    """Example futures data-source parser class, decorated by the class decorator AbuDataParseWrap"""
    # noinspection PyUnusedLocal
    def __init__(self, symbol, json_dict):
        """
        :param symbol: the requested symbol, a str object
        :param json_dict: the json data returned by the request
        """
        data = json_dict
        # prepare the attribute sequences required by AbuDataParseWrap
        if len(data) > 0:
            # date sequence
            self.date = [item[0] for item in data]
            # open price sequence
            self.open = [item[1] for item in data]
            # high price sequence
            self.high = [item[2] for item in data]
            # low price sequence
            self.low = [item[3] for item in data]
            # close price sequence
            self.close = [item[4] for item in data]
            # volume sequence
            self.volume = [item[5] for item in data]
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
The SNFuturesParser above is essentially the same as SNUSParser: decorated with the class decorator AbuDataParseWrap and implementing __init__
In this example each json record looks as follows, so the values are parsed by index:
['2017-08-08', '4295.000', '4358.000', '4281.000', '4345.000', '175570']
After the parser class you need to write a data-source class. The example below uses the Sina futures data source:
|
class SNFuturesApi(FuturesBaseMarket, SupportMixin):
    """sn futures data source, supports domestic futures"""
    K_NET_BASE = "http://stock.finance.sina.com.cn/futures/api/json_v2.php/" \
                 "IndexService.getInnerFuturesDailyKLine?symbol=%s"

    def __init__(self, symbol):
        """
        :param symbol: a Symbol object
        """
        super(SNFuturesApi, self).__init__(symbol)
        # set the data-source parser class
        self.data_parser_cls = SNFuturesParser

    def _support_market(self):
        """declare that this data source supports futures data"""
        return [EMarketTargetType.E_MARKET_TARGET_FUTURES_CN]

    def kline(self, n_folds=2, start=None, end=None):
        """daily k-line interface"""
        url = SNFuturesApi.K_NET_BASE % self._symbol.symbol_code
        data = ABuNetWork.get(url=url, timeout=(10, 60)).json()
        kl_df = self.data_parser_cls(self._symbol, data).df
        if kl_df is None:
            return None
        return FuturesBaseMarket._fix_kline_pd(kl_df, n_folds, start, end)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
The SNFuturesApi above is a futures data-source class:
A futures-type data-source class must inherit from FuturesBaseMarket
__init__ assigns the parser class
The class must mix in SupportMixin and implement _support_market to declare the supported markets; this example supports only the futures market
The class must implement the kline interface, which fetches the daily data for a symbol and hands it to the parser class
Example usage below. check_support tests whether US stocks are supported; the result is False:
|
SNFuturesApi(code_to_symbol('usSINA')).check_support(rs=False)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
check_support tests whether futures are supported; the result is True:
|
SNFuturesApi(code_to_symbol('jd0')).check_support(rs=False)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Plug SNFuturesApi into abupy and fetch the continuous egg-futures data through the make_kl_df interface:
|
abupy.env.g_private_data_source = SNFuturesApi
ABuSymbolPd.make_kl_df('jd0').tail()
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
The US options market is similar to the futures market; abupy also supports backtesting and analysis of US options, but since there is currently no suitable public data source there is no example for it. You can plug your own US options data source into abupy.
7. Plugging in an external bitcoin/litecoin data source
The example below plugs in a coin-market data source. First implement a parser class for the data the source returns, as follows:
|
@AbuDataParseWrap()
class HBTCParser(object):
    """Example coin-market data-source parser class, decorated by the class decorator AbuDataParseWrap"""
    def __init__(self, symbol, json_dict):
        """
        :param symbol: the requested symbol, a str object
        :param json_dict: the json data returned by the request
        """
        data = json_dict
        # prepare the attribute sequences required by AbuDataParseWrap
        if len(data) > 0:
            # date sequence
            self.date = [item[0] for item in data]
            # open price sequence
            self.open = [item[1] for item in data]
            # high price sequence
            self.high = [item[2] for item in data]
            # low price sequence
            self.low = [item[3] for item in data]
            # close price sequence
            self.close = [item[4] for item in data]
            # volume sequence
            self.volume = [item[5] for item in data]
            # reformat the dates into strings like 2017-07-26
            self.date = list(map(lambda date: ABuDateUtil.fmt_date(date), self.date))
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
The HBTCParser above is essentially the same as the previous parser classes: decorated with the class decorator AbuDataParseWrap and implementing __init__
In this example each json record looks as follows, so the values are parsed by index:
['20170809000000000', 22588.08, 23149.99, 22250.0, 22730.0, 7425.5134]
The dates therefore need to be reformatted with ABuDateUtil.fmt_date. Next, write the data-source class; the example below uses the Huobi data source:
|
class HBApi(TCBaseMarket, SupportMixin):
    """hb data source, supports coins: bitcoin, litecoin"""
    K_NET_BASE = 'https://www.huobi.com/qt/staticmarket/%s_kline_100_json.js?length=%d'

    def __init__(self, symbol):
        """
        :param symbol: a Symbol object
        """
        super(HBApi, self).__init__(symbol)
        # set the data-source parser class
        self.data_parser_cls = HBTCParser

    def _support_market(self):
        """supports only the coin market"""
        return [EMarketTargetType.E_MARKET_TARGET_TC]

    def kline(self, n_folds=2, start=None, end=None):
        """daily k-line interface"""
        req_cnt = n_folds * 365
        if start is not None and end is not None:
            # round up; _fix_kline_pd below trims the result again,
            # and current_str_date (not end) must be used here
            folds = math.ceil(ABuDateUtil.diff(ABuDateUtil.date_str_to_int(start),
                                               ABuDateUtil.current_str_date()) / 365)
            req_cnt = folds * 365
        url = HBApi.K_NET_BASE % (self._symbol.symbol_code, req_cnt)
        data = ABuNetWork.get(url=url, timeout=(10, 60)).json()
        kl_df = self.data_parser_cls(self._symbol, data).df
        if kl_df is None:
            return None
        return TCBaseMarket._fix_kline_pd(kl_df, n_folds, start, end)

    def minute(self, *args, **kwargs):
        """minute k-line interface"""
        raise NotImplementedError('HBApi minute NotImplementedError!')
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
The HBApi above is a data-source class supporting bitcoin and litecoin:
A coin-type data-source class must inherit from TCBaseMarket
__init__ assigns the parser class
The class must mix in SupportMixin and implement _support_market to declare the supported markets; this example supports only the coin market
The class must implement the kline interface, which fetches the daily data for a symbol and hands it to the parser class
The class must implement the minute k-line interface minute, or simply raise NotImplementedError
Example usage below. check_support tests whether US stocks are supported; the result is False:
|
HBApi(code_to_symbol('usSINA')).check_support(rs=False)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
check_support tests whether bitcoin is supported; the result is True:
|
HBApi(code_to_symbol('btc')).check_support(rs=False)
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
Plug HBApi into abupy and fetch bitcoin data through the make_kl_df interface:
|
abupy.env.g_private_data_source = HBApi
ABuSymbolPd.make_kl_df('btc').tail()
|
abupy_lecture/19-数据源(ABU量化使用文档).ipynb
|
bbfamily/abu
|
gpl-3.0
|
1.2.2 Vowel harmony (10 points)
Also handle vowel harmony. Write a function that traverses the tree manually (similarly to exercise 2.4 in the lab) and returns True or False, depending on whether the tree conforms to vowel harmony rules. Use this function in parse_tree (and parse) to filter invalid trees.
|
# Tests
assert parser.parse('legfinomabbak') == 'leg[/Supl]finom[/Adj]abb[_Comp/Adj]ak[Pl]'
assert parser.parse('legfinomabbek') == None
|
homeworks/homework3/homework3.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
Exercise 2: Syntax (55 points)
In this exercise, you will parse a treebank, and induce a PCFG grammar from it. You will then implement a probabilistic version of the CKY algorithm, and evaluate the grammar on the test split of the treebank.
2.1 Parse a treebank (10 points)
Parse the treebank file en_lines-ud-train.s in the notebook's directory. Write a generator function that reads the file and yields nltk.tree.Tree objects. In particular,
- do not read the whole file into memory
- the Tree.fromstring() function converts an s-expression into a tree
Open the file in an editor to see the formatting.
Note that the file was created by parsing the LinES dependency corpus with Stanford CoreNLP, so it is not a gold standard by any means, but it will suffice for now.
|
from nltk.tree import Tree
def parse_treebank(treebank_file):
pass
# Tests
assert sum(1 for _ in parse_treebank('en_lines-ud-train.s')) == 2613
assert isinstance(next(parse_treebank('en_lines-ud-train.s')), Tree)
|
homeworks/homework3/homework3.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
2.2 Filter trees (5 points)
In order to avoid problems further down the line, we shall only handle a subset of the trees in the treebank. We call a tree valid, if
- its root is 'S'
- the root has at least two children.
Write a function that returns True for "valid" trees and False for invalid ones. Filter your generator with it.
|
def is_tree_valid(tree):
pass
# Tests
assert sum(map(is_tree_valid, parse_treebank('en_lines-ud-train.s'))) == 2311
|
homeworks/homework3/homework3.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
2.3 Induce the PCFG grammar (10 points)
Now that you have the trees, it is time to induce (train) a PCFG grammar from them! Luckily, nltk has a function for just that: nltk.grammar.induce_pcfg. Use it to acquire your PCFG grammar. You can find hints on how to use it in the grammar module.
Note: since we want to parse sentences with the PCKY algorithm, we need our grammar to be in CNF. Unfortunately, nltk cannot convert a grammar to CNF, so you have to ensure that the trees are in CNF before feeding them to the PCFG induction function. That way, we can be sure that our grammar will be in CNF as well. There are two functions that ensure a tree is in CNF:
- collapse_unary. Make sure you call it with collapsePOS=True!
- chomsky_normal_form. Do not use any smoothing.
|
def train_grammar(trees):
pass
def is_grammar_cnf(grammar):
for prod in grammar.productions():
rhs = prod.rhs()
if len(rhs) > 2 or (len(rhs) == 1 and isinstance(rhs[0], nltk.Nonterminal)):
return False
return True
# Tests
grammar = train_grammar(filter(is_tree_valid, parse_treebank('en_lines-ud-train.s')))
assert len(grammar.productions()) == 15000
assert is_grammar_cnf(grammar)
|
homeworks/homework3/homework3.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
2.4 Implement PCKY (15 points)
Implement the PCKY algorithm. Encapsulate it in a class called PCKYParser. Extend your CKYParser solution from the lab so that it creates trees with probabilities (ProbabilisticTree). The parse() method should also accept a parameter n, and only return the most probable n trees (as a generator).
Some pointers:
- ProbabilisticTree, which inherits from
- ProbabilisticMixIn
2.5 Evaluate the grammar (15 points)
Evaluate your grammar on the test split of the treebank (en_lines-ud-dev.s). Implement the unlabelled PARSEVAL metric. See the first answer for an example.
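The unlabelled PARSEVAL idea can be sketched in plain Python. This is a hedged sketch, not the assignment's reference solution: the nested-list tree encoding and the helper names (`spans`, `unlabelled_parseval`) are my own, and a real implementation would work on nltk Tree objects instead.

```python
def spans(tree, start=0):
    """Return (start, end) spans of all constituents in a nested-list
    tree such as ['S', ['NP', 'I'], ['VP', 'run']]."""
    children = tree[1:]
    pos = start
    child_spans = []
    for child in children:
        if isinstance(child, list):
            sub = spans(child, pos)
            child_spans.extend(sub)
            pos = sub[0][1]   # first entry is the child's own span
        else:
            pos += 1          # a leaf token covers one position
    return [(start, pos)] + child_spans

def unlabelled_parseval(gold, pred):
    """Unlabelled bracket precision, recall and F1 between two trees."""
    g, p = set(spans(gold)), set(spans(pred))
    tp = len(g & p)
    prec, rec = tp / len(p), tp / len(g)
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

For instance, comparing a gold tree with five constituents against a prediction that drops one bracket yields precision 1.0 and recall 0.8.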
Exercise 3: Bonus* (20 points)
Implement a class that converts Python-style regular expressions to XFST-style ones, and executes them via foma.
3.1 Conversion* (10 points)
The functionality should be encapsulated in a class called FomaRegex. The public API specification is as follows:
- its constructor should accept a valid Python regex string (not a regex object), convert it to the XFST format and store it in its pattern member field
- it should have a convert static method that does the pattern conversion. You can use pure Python or better yet, a CFG grammar
- the class should implement the context manager protocol:
- when entering the context, an FSA file should be created via foma and its name stored in the fsa_file field. The regex <regex> ; command can be used to compile a regex in foma; for the rest, refer to the compile_lexc() function
- after the context closes, the FSA file should be deleted and the fsa_file member set to None
You only need to account for the first six rows in the table comparing the two syntaxes. Additionally, you only need to cover the characters a-zA-Z0-9 (i.e. no punctuation). Note that there are two options for verbatim texts in XFST: [a b c] or {abc}. You are encouraged to use the latter; should you choose to use the former, update the assert statements accordingly.
You don't have to worry about applying the regex at this point.
|
import os
class FomaRegex:
pass
# Tests
assert FomaRegex.convert('ab?c*d+') == '{a}{b}^<2{c}*{d}+'
assert FomaRegex.convert('a.b') == '{a}?{b}'
assert FomaRegex.convert('a+(bc|de).*') == '{a}+[{bc}|{de}]?*'
with FomaRegex('a.b') as fr:
assert fr.pattern == '{a}?{b}', 'Invalid pattern'
assert fr.fsa_file is not None, 'FSA file is None in with'
fsa_file = fr.fsa_file
assert fr.fsa_file is None, 'FSA file is not None after with'
assert not os.path.isfile(fsa_file), 'FSA file still exists after with'
|
homeworks/homework3/homework3.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
3.2 Application* (5 points)
Add a match method to the class that runs the regex against the specified string. It should return True or False depending on whether the regex matched the string.
Note: obviously you should use your FSA file and foma, not the re module. :)
|
# Tests
with FomaRegex('a*(bc|de).+') as fr:
assert fr.match('aabcd') is True
assert fr.match('ade') is False
|
homeworks/homework3/homework3.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
3.3 Multiple regexes (5 points)
Make sure not all FomaRegex objects use the same FSA file.
|
# Tests
with FomaRegex('a') as a, FomaRegex('b') as b:
assert a.fsa_file != b.fsa_file
|
homeworks/homework3/homework3.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
Example of Cell Magic Using 2 '%%'
|
%%timeit
walker = RandomWalker()
walk = [position for position in walker.walk(10000)]
|
Numpy Tutorial.ipynb
|
211217613/python_meetup
|
unlicense
|
Line magic uses 1 '%'
Example Using Functional Programming
Remove class definition
|
def random_walk_f(n):
position = 0
walk = [position]
for i in range(n):
position = 2 * random.randint(0,1) - 1
walk.append(position)
return walk
%timeit walk = random_walk_f(10000)
|
Numpy Tutorial.ipynb
|
211217613/python_meetup
|
unlicense
|
small improvement in time
Vectorized Approach Like When You Did Things in MATLAB :(
Get rid of the loop
|
from itertools import accumulate
def random_walker_v(n):
steps = random.sample([1, -1] * n, n)
return list(accumulate(steps))
%timeit walk = random_walker_v(10000)
|
Numpy Tutorial.ipynb
|
211217613/python_meetup
|
unlicense
|
WOW 2x as fast
Numpy ifying
|
import numpy as np
def random_walker_np(n):
steps = 2 * np.random.randint(0, 2, size=n) - 1
return np.cumsum(steps)
%timeit walk = random_walker_np(10000)
|
Numpy Tutorial.ipynb
|
211217613/python_meetup
|
unlicense
|
Getting Started with Basic Numpy Array
Create an array
Import numpy under the conventional np alias so we can access its functions as np.<name>
|
import numpy as np
|
Numpy Tutorial.ipynb
|
211217613/python_meetup
|
unlicense
|
Create an np array. You can pass any type of Python sequence: lists, tuples, etc.
|
a = np.array([0,1,2,3,4,5])
a
|
Numpy Tutorial.ipynb
|
211217613/python_meetup
|
unlicense
|
Multidimensional array using list of lists
|
m = np.array([[1,2,3], [4,5,6]])
m.shape
ad = a.data
list(ad)
# what type is a
type(a)
# what is the numeric type of the elements in the array
a.dtype
# What shape (dimensions) is the array
a.shape
# Bytes per element. 32bit integers should be 4 bytes
a.itemsize
# Total size in bytes of the array
a.nbytes
# Beware of type coercion
# a holds dtypes int32
print(a)
a[0] = 10.38383
print(a)
x = np.array([0,1,1.5,3])
y = np.array([1,2,3,1])
|
Numpy Tutorial.ipynb
|
211217613/python_meetup
|
unlicense
|
Reshape and Resize
Operations
|
# Element-wise addition (using x and y from the previous cell)
x + y
# Element-wise subtraction
x - y
|
Numpy Tutorial.ipynb
|
211217613/python_meetup
|
unlicense
|
Do Some Vector Math Not for Loop Math
|
%%timeit
dy = y[1:] - y[:-1]
|
Numpy Tutorial.ipynb
|
211217613/python_meetup
|
unlicense
|
%%capture <varname> captures the result of the operation into a var
|
%%capture timeit_result
%timeit python_list1 = range(1,1000)
%timeit python_list2 = np.arange(1,1000)
print(timeit_result)
|
Numpy Tutorial.ipynb
|
211217613/python_meetup
|
unlicense
|
Statistical Analysis
|
data_set = np.random.random((2,3))
print(data_set)
# namespace example: the builtin max is used here, not np.max
max(data_set[0])
|
Numpy Tutorial.ipynb
|
211217613/python_meetup
|
unlicense
|
Then, we need to go through all the rows in the file, and for each add the RecombinantFraction to the right Line and InfectionStatus. To do so, we need to choose a data structure. Here we use a dictionary, where the keys are given by Line, and each value of the dictionary is another dictionary where the keys W and I index lists of RecombinantFraction.
|
my_data = {}
with open('../data/Singh2015_data.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
my_line = row['Line']
my_status = row['InfectionStatus']
my_recomb = float(row['RecombinantFraction'])
# Test by printing the values
print(my_line, my_status, my_recomb)
# just print the first row
break
|
python/solutions/Singh2015_solution.ipynb
|
StefanoAllesina/ISC
|
gpl-2.0
|
Train
I found some help with parameters here:
* https://github.com/JohnLangford/vowpal_wabbit/wiki/Tutorial
* https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments
--cache_file train.cache
converts train_ALL.vw to a binary file for future faster processing.
Next time we go through the model building, we will use the cache file
and not the text file.
--passes
is the number of passes
--oaa 10
refers to the one-against-all (oaa) learning algorithm with 10 classes (1 to 10)
-q ii
creates interaction between variables in the two referred to namespaces
which here are the same i.e. 'image' Namespace.
An interaction variable is created from two variables 'A' and 'B'
by multiplying the values of 'A' and 'B'.
-f mnist_ALL.model
refers to file where model will be saved.
-b
refers to the number of bits in the feature table.
The default is 18, but since the interaction features greatly increase
the number of features, the value of '-b' has been increased to 22.
-l rate
Adjust the learning rate. Defaults to 0.5
--power_t p
This specifies the power on the learning rate decay. You can adjust this --power_t p where p is in the range [0,1]. 0 means the learning rate does not decay, which can be helpful when state tracking, while 1 is very aggressive. Defaults to 0.5
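The `-q ii` quadratic interaction described above can be illustrated with a tiny sketch. This is a hedged illustration only: the feature names are made up, and real vw additionally hashes the crossed feature names into the `-b`-bit feature table rather than keeping them as strings.

```python
def quadratic_features(ns_a, ns_b):
    """Cross every feature in namespace a with every feature in
    namespace b; each new feature's value is the product of the two."""
    return {
        '%s^%s' % (fa, fb): va * vb
        for fa, va in ns_a.items()
        for fb, vb in ns_b.items()
    }

# -q ii crosses the 'image' namespace with itself
image = {'p1': 0.5, 'p2': 0.2}
crossed = quadratic_features(image, image)
```

With two pixel features this produces four crossed features (p1^p1, p1^p2, p2^p1, p2^p2), which is why interactions blow up the feature count and motivate a larger `-b`.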
|
!rm train_logmulti.vw.cache
!rm mnist_train_logmulti.model
!vw -d data/mnist_train.vw -b 19 --ect 10 -f mnist_train_logmulti.model -q ii --passes 100 -l 0.4 --early_terminate 3 --cache_file train_logmulti.vw.cache --power_t 0.6
|
vw/VW_benchmark_log_multi.ipynb
|
grfiv/MNIST
|
mit
|
Predict
-t
is for test file
-i
specifies the model file created earlier
-p
where to store the class predictions [1,10]
|
!rm predict_logmulti.txt
!vw -t data/mnist_test.vw -i mnist_train_logmulti.model -p predict_logmulti.txt
|
vw/VW_benchmark_log_multi.ipynb
|
grfiv/MNIST
|
mit
|
Analyze
|
y_true = []
with open("data/mnist_test.vw", 'r') as f:
    for line in f:
        m = re.search(r'^\d+', line)
        if m:
            y_true.append(int(m.group()))
y_pred = []
with open("predict_logmulti.txt", 'r') as f:
    for line in f:
        m = re.search(r'^\d+', line)
        if m:
            y_pred.append(int(m.group()))
target_names = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"] # NOTE: plus one
def plot_confusion_matrix(cm,
target_names,
title='Proportional Confusion matrix: VW log_multi on 784 pixels',
cmap=plt.cm.Paired):
"""
given a confusion matrix (cm), make a nice plot
see the scikit-learn documentation for the original, done for the iris dataset
"""
plt.figure(figsize=(8, 6))
plt.imshow(cm / cm.sum(axis=1, keepdims=True), interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cm = confusion_matrix(y_true, y_pred)
print(cm)
model_accuracy = sum(cm.diagonal())/len(y_pred)
model_misclass = 1 - model_accuracy
print("\nModel accuracy: {0}, model misclass rate: {1}".format(model_accuracy, model_misclass))
plot_confusion_matrix(cm, target_names)
|
vw/VW_benchmark_log_multi.ipynb
|
grfiv/MNIST
|
mit
|
Of course, I am going to use the data on COVID-19 cases in Uruguay. I do not have them fully complete, but thanks to the people at GUIAD-Covid-19 I can get close. For extra magic, I will use only two attributes from that data: the day since we started measuring, and the number of cases.
|
covid=pd.read_csv('https://raw.githubusercontent.com/natydasilva/COVID19-UDELAR/master/Datos/Datos_Nacionales/estadisticasUY.csv?token=ABCA7RFDBSMT4PMGJMNCXQS6RUVRA')
covid
data=covid.loc[3:,['dia', 'acumTestPositivos']]
data
|
src/Sobreajustando.ipynb
|
gmonce/datascience
|
gpl-3.0
|
And I am going to use a polynomial function of degree 5, to see whether we can fit the data and find a pattern. This procedure is called regression, and it is one of the tools of Artificial Intelligence.
|
degree=5
x=data['dia']
x_plot=np.linspace(4,19,20)
y=data['acumTestPositivos']
X=x[:,np.newaxis]
X_plot=x_plot[:,np.newaxis]
plt.scatter(x,y, color='cornflowerblue', linewidth=2,
label="ground truth")
model = make_pipeline(PolynomialFeatures(degree), Ridge())
model.fit(X, y)
y_plot = model.predict(X_plot)
plt.plot(x_plot, y_plot, color='teal', linewidth=2,
label="degree %d" % degree)
plt.title("Casos positivos de COVID-19 en Uruguay")
plt.xlabel("día")
plt.ylabel("casos confirmados")
|
src/Sobreajustando.ipynb
|
gmonce/datascience
|
gpl-3.0
|
Incredible as it may seem, we have found a function that fits the confirmed positive cases in Uruguay almost perfectly. And now, the final touch: let us use this function to predict how many cases there will be in 10 days, if this pace continues.
|
x_plot = np.linspace(0,30,30)
plt.scatter(x,y, color='cornflowerblue', linewidth=2,
label="ground truth")
X_plot=x_plot[:,np.newaxis]
y_plot = model.predict(X_plot)
plt.plot(x_plot, y_plot, color='teal', linewidth=2,
label="degree %d" % degree)
plt.title("Predicción:Casos positivos de COVID-19 en Uruguay")
plt.xlabel("día")
plt.ylabel("casos confirmados")
|
src/Sobreajustando.ipynb
|
gmonce/datascience
|
gpl-3.0
|
Import Python packages
Execute the command below (Shift + Enter) to load all the python libraries we'll need for the lab.
|
import datetime
import pickle
import os
import pandas as pd
import xgboost as xgb
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.utils import shuffle
from sklearn.base import clone
from sklearn.model_selection import train_test_split
from witwidget.notebook.visualization import WitWidget, WitConfigBuilder
import custom_transforms
import warnings
warnings.filterwarnings(action='ignore', category=DeprecationWarning)
|
quests/dei/census/income_xgboost.ipynb
|
turbomanage/training-data-analyst
|
apache-2.0
|
Download and process data
The models you'll build will predict whether an individual's income level is above or below $50,000 per year, given 14 data points about each individual. You'll train your models on this UCI Census Income Dataset.
We'll read the data into a Pandas DataFrame to see what we'll be working with. It's important to shuffle our data in case the original dataset is ordered in a specific way. We use a scikit-learn utility called shuffle to do this, which we imported in the first cell:
|
train_csv_path = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
raw_train_data = pd.read_csv(train_csv_path, names=COLUMNS, skipinitialspace=True)
raw_train_data = shuffle(raw_train_data, random_state=4)
|
quests/dei/census/income_xgboost.ipynb
|
turbomanage/training-data-analyst
|
apache-2.0
|
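Note that shuffle reorders whole rows reproducibly when given a random_state; no values are altered or lost. A tiny illustration on a toy frame (the column names here are made up for the example):

```python
import pandas as pd
from sklearn.utils import shuffle

df = pd.DataFrame({'a': [10, 20, 30, 40], 'b': ['w', 'x', 'y', 'z']})
shuffled = shuffle(df, random_state=4)

# The rows themselves are unchanged; only their order (and the index order) moves
print(sorted(shuffled['a'].tolist()))
```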
Now you're ready to build and train your first model!
Build a First Model
The model we build closely follows a template for the census dataset found on AI Hub. For our model we use an XGBoost classifier. However, before we train our model we have to pre-process the data a little bit. We build a processing pipeline using Scikit-Learn's Pipeline constructor. We apply some custom transformations that are defined in custom_transforms.py. Open the file custom_transforms.py and inspect the code. Our features are either numerical or categorical. The numerical features are age and hours-per-week. These features will be processed by applying Scikit-Learn's StandardScaler function. The categorical features are workclass, education, marital-status, and relationship. These features are one-hot encoded.
|
numerical_indices = [0, 12]
categorical_indices = [1, 3, 5, 7]
p1 = make_pipeline(
custom_transforms.PositionalSelector(categorical_indices),
custom_transforms.StripString(),
custom_transforms.SimpleOneHotEncoder()
)
p2 = make_pipeline(
custom_transforms.PositionalSelector(numerical_indices),
StandardScaler()
)
p3 = FeatureUnion([
    ('categoricals', p1),
    ('numericals', p2),
])
|
quests/dei/census/income_xgboost.ipynb
|
turbomanage/training-data-analyst
|
apache-2.0
|
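Between building the feature union and deploying, the lab fits the classifier on the transformed features (that step is not shown in this excerpt). The sketch below is a generic stand-in: it uses scikit-learn's ColumnTransformer in place of the custom_transforms helpers, LogisticRegression in place of XGBoost, and a tiny made-up frame in place of the census data, to show the same preprocess-then-fit shape:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression

# Tiny synthetic stand-in for the census rows
df = pd.DataFrame({
    'age': [25, 38, 52, 61, 30, 45],
    'workclass': ['Private', 'Self-emp', 'Private', 'Gov', 'Private', 'Gov'],
    'hours-per-week': [40, 50, 60, 40, 20, 45],
    'income-level': ['<=50K', '>50K', '>50K', '>50K', '<=50K', '<=50K'],
})
X = df.drop(columns='income-level')
y = (df['income-level'] == '>50K').astype(int)

# Scale the numericals, one-hot encode the categoricals, then fit a classifier
pre = ColumnTransformer([
    ('num', StandardScaler(), ['age', 'hours-per-week']),
    ('cat', OneHotEncoder(handle_unknown='ignore'), ['workclass']),
])
clf = Pipeline([('pre', pre), ('model', LogisticRegression())])
clf.fit(X, y)
print(clf.predict(X))
```

In the lab itself, the fitted pipeline plus XGBoost model is pickled and copied to the Cloud Storage bucket referenced in the deployment commands below.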
Now it's time to deploy the model. We can do that with this gcloud command:
|
%%bash
MODEL_NAME="census_income_classifier"
VERSION_NAME="original"
MODEL_DIR="gs://$QWIKLABS_PROJECT_ID/original/"
CUSTOM_CODE_PATH="gs://$QWIKLABS_PROJECT_ID/custom_transforms-0.1.tar.gz"
gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--runtime-version 1.15 \
--python-version 3.7 \
--origin $MODEL_DIR \
--package-uris $CUSTOM_CODE_PATH \
--prediction-class predictor.MyPredictor
|
quests/dei/census/income_xgboost.ipynb
|
turbomanage/training-data-analyst
|
apache-2.0
|
Test your model by running this code:
|
!gcloud ai-platform predict --model=census_income_classifier --json-instances=predictions.json --version=original
|
quests/dei/census/income_xgboost.ipynb
|
turbomanage/training-data-analyst
|
apache-2.0
|
Deploy the model to AI Platform using the following bash script:
|
%%bash
gsutil cp model.pkl gs://$QWIKLABS_PROJECT_ID/balanced/
MODEL_NAME="census_income_classifier"
VERSION_NAME="balanced"
MODEL_DIR="gs://$QWIKLABS_PROJECT_ID/balanced/"
CUSTOM_CODE_PATH="gs://$QWIKLABS_PROJECT_ID/custom_transforms-0.1.tar.gz"
gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--runtime-version 1.15 \
--python-version 3.7 \
--origin $MODEL_DIR \
--package-uris $CUSTOM_CODE_PATH \
--prediction-class predictor.MyPredictor
|
quests/dei/census/income_xgboost.ipynb
|
turbomanage/training-data-analyst
|
apache-2.0
|
We can make pretty graphs
|
import matplotlib.pyplot as plt
import math
import numpy as np
%matplotlib inline
t = np.arange(0., 5., 0.2)
plt.plot(t, t, 'r--', t, t**2, 'bs')
|
Intro to Jupyter Notebooks.ipynb
|
211217613/python_meetup
|
unlicense
|
Now we make a relatively complex matplotlib figure using subplot functions.
|
fig = plt.figure(figsize=(4,4))
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1, 2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2, 0))
ax5 = plt.subplot2grid((3,3), (2, 1))
|
examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb
|
FlyRanch/figurefirst
|
mit
|
Now import figurefirst, and with a single function call we generate an svg file ('test_mpl_conversion.svg'), which is saved to disk, and automatically loaded as a FigureFirst layout object, ready to go.
|
import figurefirst as fifi
from importlib import reload  # reload is not a builtin in Python 3
reload(fifi.mpl_fig_to_figurefirst_svg)
layout = fifi.mpl_fig_to_figurefirst_svg.mpl_fig_to_figurefirst_svg(fig, 'test_mpl_conversion.svg')
# Now lets look at the SVG file (and close the automatically displayed matplotlib figure)
plt.close()
from IPython.display import display,SVG
display(SVG('test_mpl_conversion.svg'))
|
examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb
|
FlyRanch/figurefirst
|
mit
|
You can now open the svg file in an svg editor, like Inkscape, make adjustments to the rectangles, and reload the file using the standard FigureFirst API.
Try it, and see the resulting changes below.
|
layout = fifi.svg_to_axes.FigureLayout('test_mpl_conversion.svg', make_mplfigures=True)
|
examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb
|
FlyRanch/figurefirst
|
mit
|
Using Matplotlib and FigureFirst layouts together
Suppose you would like to make a figure with 3 panels, where each panel contains a grid of axes that are easier to generate using Matplotlib functions than they are to draw. Or maybe you want the layout of these axes to be controlled based on the data. This can be accomplished by bringing together the functions described above, and the primary FigureFirst workflow.
First we generate a new Matplotlib figure, as before.
|
fig = plt.figure(figsize=(4,4))
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1, 2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2, 0))
ax5 = plt.subplot2grid((3,3), (2, 1))
|
examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb
|
FlyRanch/figurefirst
|
mit
|