| markdown | code | path | repo_name | license |
|---|---|---|---|---|
Problem 1 - Cross-validation (10 pts)
Cross-validation is the method that we can use to choose hyperparameters, such as the ridge penalty $\lambda$ in ridge regression. It is an empirical technique that is incredibly effective, even if the data does not exactly fit the assumptions of your model (e.g. if the errors are ... | # Load data!
p1a_file = np.load('homework_2_p1a_data.npz')
x = p1a_file['x']
y = p1a_file['y']
x_test = p1a_file['x_test']
y_test = p1a_file['y_test']
n_training, n_features = x.shape | homeworks/homework_2/homework_2.ipynb | alexhuth/n4cs-fa2017 | gpl-3.0 |
(a) Leave-one-out (LOO) cross-validation (3 pts)
This is perhaps the simplest cross-validation method. Select one datapoint from the training set to serve as the validation set, train on the other $n-1$ datapoints, and then test on the held-out datapoint. Repeat this process for each datapoint and average the results.
Thi... | lambdas = np.logspace(-3, 5, 10)
val_ses = np.zeros((n_training, len(lambdas)))
def ridge(x, y, lam):
"""This function does ridge regression with the stimuli x and responses y with
ridge parameter lam (short for lambda). It returns the weights.
This is definitely not the most efficient way to do this, but ... | homeworks/homework_2/homework_2.ipynb | alexhuth/n4cs-fa2017 | gpl-3.0 |
(b) $k$-fold cross-validation (3 pts)
You may have noticed that LOO CV is quite slow. That's because it needs to fit one model per datapoint. If you have a lot of datapoints, that's just not gonna work! Further, as mentioned above, LOO only works well when your datapoints are independent. Suppose you're recording fMRI d... | k = 6 # let's do 6 folds
n_per_fold = n_training // k # number of datapoints per fold (integer division)
lambdas = np.logspace(-3, 5, 10)
val_mses = np.zeros((k, len(lambdas)))
for fold in range(k):
# split the training dataset into two parts: one with only the points in fold "fold"
# and one with all the other datapoints
... | homeworks/homework_2/homework_2.ipynb | alexhuth/n4cs-fa2017 | gpl-3.0 |
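The fold split described above can be sketched as follows; the sizes are made up, and a real solution would also consider shuffling or temporal structure, as the text discusses:

```python
import numpy as np

n_training, k = 120, 6
n_per_fold = n_training // k          # integer division: datapoints per fold
indices = np.arange(n_training)

folds = []
for fold in range(k):
    # validation indices for this fold; everything else is the training part
    val_idx = indices[fold * n_per_fold:(fold + 1) * n_per_fold]
    train_idx = np.setdiff1d(indices, val_idx)
    folds.append((train_idx, val_idx))
```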
(c) Monte Carlo cross-validation (4 pts)
One issue with $k$-fold CV is that the size of the validation set depends on the number of folds. If you want really stable estimates for your hyperparameter, you want to have a pretty large validation set, but also do a lot of folds. You can accomplish this by, on each iteratio... | n_mc_iters = 50 # let's do 50 Monte Carlo iterations
n_per_mc_iter = 50 # on each MC iteration, hold out 50 datapoints to be the validation set
lambdas = np.logspace(-3, 5, 10)
val_mses = np.zeros((n_mc_iters, len(lambdas)))
for it in range(n_mc_iters):
# split the training dataset into two parts: one with a ran... | homeworks/homework_2/homework_2.ipynb | alexhuth/n4cs-fa2017 | gpl-3.0 |
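The random hold-out split can be sketched like this; the sizes are illustrative, not the homework's:

```python
import numpy as np

n_training, n_per_mc_iter, n_mc_iters = 200, 50, 5
rng = np.random.RandomState(0)

splits = []
for it in range(n_mc_iters):
    # Draw a fresh random validation set on every iteration
    perm = rng.permutation(n_training)
    val_idx, train_idx = perm[:n_per_mc_iter], perm[n_per_mc_iter:]
    splits.append((train_idx, val_idx))
```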
Problem 2 - Multiple feature spaces (20 pts)
Suppose you've done an experiment and measured responses to a bunch of stimuli. You've got three different hypotheses about how the stimuli might be represented in the responses. You instantiate these hypotheses as three different linearizing transforms, giving you three dif... | from homework_2_utils import make_data
num_training = 500 # total number of datapoints in training set
num_test = 100 # total number of datapoints in test set
num_features = [12, 50, 100] # number of features in each feature space
num_responses = 35 # number of responses (voxels or neurons)
# This is just a bunch of ... | homeworks/homework_2/homework_2.ipynb | alexhuth/n4cs-fa2017 | gpl-3.0 |
(a) - Deciding which is best (8 pts)
Construct separate linear models using each of the three different feature spaces. Use those models to predict responses in the test set. Compute the prediction performance of each model. Which model is the best overall? (Don't worry about statistical comparison, just choose the one... | lambdas = np.logspace(1, 7, 15) # use these lambdas
## YOUR CODE HERE ##
# Plot mean validation set MSE as a function of lambda for each feature space
## YOUR CODE HERE ##
# Find best lambda for each feature space
best_lambda_1 = ## YOUR CODE HERE ##
best_lambda_2 = ## YOUR CODE HERE ##
best_lambda_3 = ## YOUR CODE... | homeworks/homework_2/homework_2.ipynb | alexhuth/n4cs-fa2017 | gpl-3.0 |
(b) - Variance partitioning (12 pts)
Each of these feature spaces explains something about the response. Does the variance explained by each feature space overlap with the others? Do variance partitioning to find out!
There should be 7 variance partitions: one for the unique contribution of each feature space, one for t... | # Here's a function that computes R^2 of a_hat predicting a
Rsq = lambda a, a_hat: 1 - (a - a_hat).var() / a.var()
# Here it's probably useful to define a cv_ridge function that you can call a bunch of times
def cv_ridge(x, y, x_test, y_test):
lambdas = np.logspace(1, 7, 15) # use these lambdas
## YOUR CODE H... | homeworks/homework_2/homework_2.ipynb | alexhuth/n4cs-fa2017 | gpl-3.0 |
Black-box model 02
The idea is to estimate the model parameters from the experimental measurement of the indoor temperature ($T$) and the weather data. The coefficients obtained are not necessarily physically meaningful, but they make it possible to estimate and compare the variations from day to day, ... | df_full = pd.read_pickle( 'weatherdata.pck' )
df = df_full[['T_int', 'temperature', 'flux_tot', 'windSpeed']].copy() | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
We have recordings of the indoor temperature ($T_{int}$) and the outdoor temperature ('temperature'): | df[['T_int', 'temperature']].plot( figsize=(14, 4) ); plt.ylabel('°C'); | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
And the solar flux, computed for my apartment and projected onto the area and orientation of my windows (Velux): | # Solar flux on the window panes:
df[['flux_tot']].plot( figsize=(14, 4) ); plt.ylabel('Watt'); | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
Differential equation solver (ODE)
The equation is integrated in time with scipy's odeint (doc, OdePack). | from scipy.integrate import odeint
def get_dTdt( T, t, params, get_Text, get_Phi ):
    """ Time derivative of the temperature
        params : [ h/M , eta/M ]
        get_Text, get_Phi: interpolation functions
    """
    T_ext = get_Text( t )
    phi = get_Phi( t )
    dTdt = params[0] * ( T_e... | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
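The model being integrated is dT/dt = (h/M)(T_ext - T) + (eta/M) * Phi. As a sanity check independent of odeint, here is a minimal explicit-Euler sketch with toy constant forcing (an assumption, not the notebook's weather data):

```python
import numpy as np

def euler_temperature(T0, t, params, T_ext, phi):
    # Explicit Euler integration of dT/dt = params[0]*(T_ext - T) + params[1]*phi
    T = np.empty_like(t, dtype=float)
    T[0] = T0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        dTdt = params[0] * (T_ext[i - 1] - T[i - 1]) + params[1] * phi[i - 1]
        T[i] = T[i - 1] + dt * dTdt
    return T

t = np.linspace(0, 10, 1001)
T_ext = np.full_like(t, 5.0)   # constant 5 °C outside
phi = np.zeros_like(t)         # no solar flux
T = euler_temperature(20.0, t, (1.0, 1.0), T_ext, phi)
```

With no flux and a constant exterior, the indoor temperature should relax exponentially toward T_ext, which gives a quick check on the sign conventions.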
Note: the factors 100 and 1e-6 are there to get values close to unity, and the same order of magnitude for $\Phi$ and $\Delta T$... for the optimization
Test on the full dataset: | params = ( 3, 3 )
res = apply_model( df, 30, params )
plt.figure( figsize=(14, 4) )
plt.plot( res )
plt.plot( df['T_int'].as_matrix() ) ; | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
Day-by-day estimation
The parameters $\eta$ and $h$ are in reality not constant. They depend on how the apartment is used, mainly on whether the windows are open and on the position of the shutters over them. They are therefore functions of the time of day and of the weather. The idea is to estimate their values ... | def get_errorfit( params, data, T_start ):
    """ Compute the model for the given data and parameters,
        then compute the error against the (non-NaN) experimental data
    """
    T_exp = data['T_int'].as_matrix()
    T_theo = apply_model( data, T_start, params )
    delta = (T_exp - T_theo)**2
    re... | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
Splitting into day / night periods | """ Estimate the day and night periods from the solar flux
"""
df['isnight'] = ( df['flux_tot'] == 0 ).astype(int)
# Number the successive periods:
nights_days = df['isnight'].diff().abs().cumsum()
nights_days[0] = 0
df_byday = df.groupby(nights_days)
Groupes = [ int( k ) for k in df_byday.groups... | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
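The diff/abs/cumsum trick used above to number the day/night periods works on any boolean sequence; a pure-NumPy sketch on a toy array:

```python
import numpy as np

is_night = np.array([1, 1, 0, 0, 0, 1, 1, 0], dtype=int)

# Each change of value starts a new period; the cumulative count of changes
# labels consecutive runs with increasing integers
changes = np.abs(np.diff(is_night, prepend=is_night[0]))
period_id = np.cumsum(changes)
```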
Fitting period by period | def fit_a_day( data, T_zero ):
    """ Fit the model on the data 'data'
        with the initial temperature 'T_zero'
    """
    # Handle missing experimental data:
    T_exp = data['T_int'].as_matrix()
    nombre_nonNaN = T_exp.size - np.isnan( T_exp ).sum()
    if nombre_nonNaN < 10:
        # not enough ... | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
Plot for one period (debug) | len( Groupes )
data = df_byday.get_group( Groupes[1] )
T_int = data['T_int'].interpolate().as_matrix()
T_zero = T_int[ ~ np.isnan( T_int ) ][0]
params, res = fit_a_day( data, T_zero )
print( data.index[0] )
print( params )
plt.figure( figsize=(14, 5) )
plt.plot(data.index, res, '--', label='T_theo' )
plt.plot(da... | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
Computation for all periods:
takes some time | # init
df['T_theo'] = 0
df['eta_M'], df['h_M'] = 0, 0
# initial value
T_zero = df['T_int'][ df['T_int'].first_valid_index() ]
for grp_id in Groupes:
print( '%i, ' % grp_id, end='' )
data_day = df_byday.get_group( grp_id )
# debug: case with no experimental data:
if np.isnan( T_zero ):
T_int... | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
Orders of magnitude
eta: a percentage between 0 and 100
h: there are two contributions:
- h_min: the overall insulation of the building (walls, roof and above all windows). This value should be constant.
- h_aero: air infiltration and ventilation.
M: the thermal mass
M ~ 0.1e6 J/K ???
h_min corr... | U_vitrage = 2.8 # W/m2/K, typically for double glazing
U_cadres = 0.15 + 0.016 # W/m/K, for a wooden frame with a square cross-section; this is actually the conductivity of the wood
# + psi ...
aire_vitre = 0.6*0.8*2 + 1.2*0.8 + 0.3*0.72*4 + 0.25**2 # m2
perimetre = (0.6+0.8)*4 + (1.2+0.8)*2 +... | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
Residuals | R = df['T_int'] - df['T_theo']
R.plot( figsize=(14, 5), style='k' ); plt.ylabel('°C');
# Plot the relative temperature variation
plt.figure( );
for grp_id in Groupes:
data = df_byday.get_group( grp_id )
if np.isnan( data['T_int'] ).all():
continue
T_int = data['T_int'].as_matrix()
Tmi... | BlackBoxModel02.ipynb | xdze2/thermique_appart | mit |
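A common way to summarize these residuals is an RMSE that ignores the missing measurements; a small sketch with made-up values:

```python
import numpy as np

T_int = np.array([20.0, 20.5, np.nan, 21.0, 20.8])    # measured, with a gap
T_theo = np.array([20.1, 20.4, 20.6, 21.2, 20.7])    # modeled

residuals = T_int - T_theo
rmse = np.sqrt(np.nanmean(residuals ** 2))           # NaNs are skipped
```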
3. Enter DV360 User Audit Recipe Parameters
DV360 only permits SERVICE accounts to access the user list API endpoint, be sure to provide and permission one.
Wait for BigQuery->->->DV_... to be created.
Wait for BigQuery->->->Barnacle_... to be created, then copy and connect the following data sources.
Join the StarThi... | FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'auth_write':'service', # Credentials used for writing data.
'partner':'', # Partner ID to run user audit on.
'recipe_slug':'', # Name of Google BigQuery dataset to create.
}
print("Parameters Set To: %s" % FIELDS)
| colabs/barnacle_dv360.ipynb | google/starthinker | apache-2.0 |
4. Execute DV360 User Audit
This does NOT need to be modified unless you are changing the recipe, click play. | from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dataset':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'r... | colabs/barnacle_dv360.ipynb | google/starthinker | apache-2.0 |
We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed rep... | # mnist.train.images[0]
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_shape = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32,shape=(None,image_shape),name='inputs')
targets_ = tf.placeholder(tf.float32,shape=(None,image_shape),name='targets'... | autoencoder/Simple_Autoencoder.ipynb | zhuanxuhit/deep-learning | mit |
Define an op to dequeue a line from file: | reader = tf.TextLineReader()
key_op, value_op = reader.read(filename_queue) | ch02_basics/Concept10_queue_text.ipynb | BinRoot/TensorFlow-Book | mit |
Start all queue runners collected in the graph: | sess = tf.InteractiveSession()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord) | ch02_basics/Concept10_queue_text.ipynb | BinRoot/TensorFlow-Book | mit |
Try reading lines from the file by dequeuing: | for i in range(100):
key, value = sess.run([key_op, value_op])
print(key, value) | ch02_basics/Concept10_queue_text.ipynb | BinRoot/TensorFlow-Book | mit |
Unlike the previous chapters, here we disable the sandbox data interface and use live data sources to provide the data: | abupy.env.disable_example_env_ipython() | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
1. Switching the data-fetch mode
Use g_data_fetch_mode to check the current data-fetch mode: | abupy.env.g_data_fetch_mode | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
The default mode is E_DATA_FETCH_NORMAL. NORMAL means the cache is tried first; if the data is not in the cache, the network is accessed to try to fetch it. There are also some optimizations: for example, even when the cached data cannot satisfy the request, if the cache index records that a network fetch was already attempted today, the network is not accessed again.
For more details, read the ABuDataSource code.
E_DATA_FETCH_FORCE_NET forces the data to be updated over the network. It is generally not recommended; it is used when the data source has been switched or when there is a problem with the cached data: | abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_NET
ABuSymbolPd.make_kl_df('usBIDU').tail() | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
E_DATA_FETCH_FORCE_LOCAL forces fetching from the cache. In practice this is the mode generally used for backtesting: after writing a strategy and measuring its backtest results, you usually need to modify the strategy repeatedly and rerun the backtest. Forcing the cache has these benefits:
It guarantees the dataset does not change, so the measured results are comparable
It improves backtest efficiency, especially for whole-market backtests
It separates data fetching from backtesting, which makes troubleshooting easier | abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
ABuSymbolPd.make_kl_df('usBIDU').tail(1) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Now restore the data-fetch mode to the default E_DATA_FETCH_NORMAL: | abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_NORMAL | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
2. Switching the data storage
The default cache storage format is CSV, as shown below: | abupy.env.g_data_cache_type | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
The cached CSV data files are stored under ~/abu/data/csv/; run the command below to open the directory directly: | if abupy.env.g_is_mac_os:
!open $abupy.env.g_project_kl_df_data_csv
else:
!echo $abupy.env.g_project_kl_df_data_csv | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
You can switch to another storage format via g_data_cache_type; below, HDF5 is used for data storage: | abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_HDF5 | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
The cached HDF5 file is at ~/abu/data/df_kl.h5; run the command below to open the directory directly, or to print the full path: | if abupy.env.g_is_mac_os:
!open $abupy.env.g_project_data_dir
else:
!echo $abupy.env.g_project_data_dir
ABuSymbolPd.make_kl_df('usTSLA').tail() | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Now switch back to the default CSV mode; CSV's advantages are that it needs little storage space, supports parallel reads and writes, and is well supported across platforms: | abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
3. Switching the data source
The following shows that the current data source g_market_source is the Baidu data source: | abupy.env.g_market_source | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Below we try to fetch Bitcoin data, but the output shows that the Baidu data source does not support Bitcoin: | ABuSymbolPd.make_kl_df('btc') | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
After switching the data source to the Huobi data source, the data can be fetched normally, as follows: | abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_hb_tc
ABuSymbolPd.make_kl_df('btc').tail() | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Similarly, below we try to fetch egg futures data, but the output shows that the Huobi data source does not support the futures market: | ABuSymbolPd.make_kl_df('jd0') | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
After switching to the Sina futures data source, the data can be fetched normally, as follows: | abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_sn_futures
ABuSymbolPd.make_kl_df('jd0').tail() | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
4. Updating whole-market data
Earlier chapters used abu.run_loop_back for trade backtests on sandbox data. If you need live data, especially for backtests with many trades such as whole-market tests, it is recommended to update the data with abu.run_kl_update before running abu.run_loop_back.
run_kl_update first forces a whole-market data update over the network; once the update is finished, all of the market's trading data is written to the cache, and abu.run_loop_back then runs the backtest in local-data mode. This separates data updating from strategy backtesting and improves efficiency.
The code below fetches six years of trading data for US stocks, A-shares, Hong Kong stocks, futures, Bitcoin, and Litecoin; the following chapters use these datasets in backtest examples. Readers may fetch only... | if abupy.env.g_is_mac_os:
!open $abupy.env.g_project_data_dir
else:
!echo $abupy.env.g_project_data_dir | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
If you do not want to download the data files directly, you can instead switch to the Tencent data source as below and run a whole-market update of the US stock data:
Note: a time-consuming operation, about 15 minutes; you can run it while doing something else | %%time
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_tx
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
abu.run_kl_update(start='2011-08-08', end='2017-08-08', market=EMarketTargetType.E_MARKET_TARGET_US, n_jobs=10) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
If you do not want to download the data files directly, you can instead switch to the Baidu data source and run a whole-market update of the A-share data:
Note: a time-consuming operation, about 20 minutes; you can run it while doing something else | %%time
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_bd
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
abu.run_kl_update(start='2011-08-08', end='2017-08-08', market=EMarketTargetType.E_MARKET_TARGET_CN, n_jobs=10) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
If you do not want to download the data files directly, you can instead switch to the NetEase data source and run a whole-market update of the Hong Kong stock data:
Note: a time-consuming operation, about 5 minutes; you can run it while doing something else | %%time
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_nt
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
abu.run_kl_update(start='2011-08-08', end='2017-08-08', market=EMarketTargetType.E_MARKET_TARGET_HK, n_jobs=10) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Switch to the Sina futures data source, then run a whole-market update of the futures data:
Note: not time-consuming, about 30 seconds | %%time
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_sn_futures
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
abu.run_kl_update(start='2011-08-08', end='2017-08-08', market=EMarketTargetType.E_MARKET_TARGET_FUTURES_CN, n_jobs=4) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Switch to the Huobi data source, then run a whole-market update of the Bitcoin and Litecoin data:
Note: not time-consuming, about 5 seconds | %%time
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_hb_tc
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
abu.run_kl_update(start='2011-08-08', end='2017-08-08', market=EMarketTargetType.E_MARKET_TARGET_TC, n_jobs=2) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
5. Plugging in an external data source: stock data sources
Data sources built into abupy:
US market: Tencent, Baidu, NetEase, Sina
Hong Kong market: Tencent, Baidu, NetEase
A-share market: Tencent, Baidu, NetEase
Futures market: Sina domestic futures, Sina international futures
Bitcoin, Litecoin: Huobi
These data sources are provided only for learning, and there is no guarantee they will remain available. Moreover, if you care about data quality (some sources, for example, have errors in forward-adjusted prices, and some have inaccurate volumes), you will need to plug in your own data source.
Below is an example of plugging in a stock-type data source. First implement a parser class for the data returned by the source, as follows: | @AbuDataParseWrap()
下面首先示例接入股票类型的数据源,首先实现一个数据源返回数据解析类,如下所示: | @AbuDataParseWrap()
class SNUSParser(object):
    """SNUS data source parser class, decorated by the class decorator AbuDataParseWrap"""
    def __init__(self, symbol, json_dict):
        """
        :param symbol: the requested symbol, a str object
        :param json_dict: the json data returned by the request
        """
        data = json_dict
        # Prepare the attribute sequences required by AbuDataParseWrap
        if len(data) >... | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
The SNUSParser written above is a parser class for the data returned by the source:
The parser class must be decorated with the class decorator AbuDataParseWrap
Implement the __init__ function, splitting the json_dict here into self.date, self.open, self.close, self.high, self.low, self.volume
As in this example, each json record has the format:
{'d': '2017-08-08', 'o': '102.29', 'h': '102.35', 'l': '99.16', 'c': '100.07', 'v': '1834706'}
The purpose of the __init__ function is to decompose the raw network data into the five basic sequences above; afterwards the class decorator AbuDat... | class SNUSApi(StockBaseMarket, SupportMixin):
    """SNUS data source, supporting US stocks"""
    K_NET_BASE = "http://stock.finance.sina.com.cn/usstock/api/json_v2.php/US_MinKService.getDailyK?" \
                 "symbol=%s&___qn=3n"
    def __init__(self, symbol):
        """
        :param symbol: a Symbol object
        """
        super(SNUS... | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
The SNUSApi written above is a stock data source class:
A stock-type data source class must inherit from StockBaseMarket
__init__ specifies the parser class for the source
The data source class must mix in SupportMixin and implement the _support_market method to declare the supported markets; this example supports only the US market
The data source class must implement the kline interface, fetching the daily data for the given symbol and handing it to the parser class
The data source class must implement the minute k-line interface, or may simply raise NotImplementedError
Usage example below. check_support tests whether A-shares are supported; the result is False: | SNUSApi(code_to_symbol('sh601766')).check_support(rs=False) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
check_support below tests whether US stocks are supported; the result is True: | SNUSApi(code_to_symbol('usSINA')).check_support(rs=False) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Fetch data through the kline interface, as shown below: | SNUSApi(code_to_symbol('usSINA')).kline().tail() | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
To plug SNUSApi into the abupy system, simply assign the data source class to abupy.env.g_private_data_source: | abupy.env.g_private_data_source = SNUSApi | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Below, fetching sh601766 through the make_kl_df interface shows that SNUSApi does not support it: | ABuSymbolPd.make_kl_df('sh601766') | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Below, fetching US stock data through the make_kl_df interface, the returned data comes from the SNUSApi implemented above: | ABuSymbolPd.make_kl_df('usSINA').tail() | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
6. Plugging in an external data source: futures data sources
Below is an example of plugging in a futures-type data source. First implement a parser class for the data returned by the source, as follows: | @AbuDataParseWrap()
class SNFuturesParser(object):
    """Example futures data source parser class, decorated by the class decorator AbuDataParseWrap"""
    # noinspection PyUnusedLocal
    def __init__(self, symbol, json_dict):
        """
        :param symbol: the requested symbol, a str object
        :param json_dict: the json data returned by the request
        """
        data = json_dict
        # Prepare the attribute sequences required by AbuDataPars... | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
The SNFuturesParser written above is essentially the same as SNUSParser: decorated by the class decorator AbuDataParseWrap, implementing the __init__ function
As in this example each json record has the following format, the values are parsed by position:
['2017-08-08', '4295.000', '4358.000', '4281.000', '4345.000', '175570']
After writing the parser class, you need to write a data source class, shown below using the Sina futures data source as the example: | class SNFuturesApi(FuturesBaseMarket, SupportMixin):
    """Sina futures data source, supporting domestic futures"""
    K_NET_BASE = "http://stock.finance.sina.com.cn/futures/api/json_v2.php/" \
                 "IndexService.getInnerFuturesDailyKLine?symbol=%s"
    def __init__(self, symbol):
        """
        :param symbol: a Symbol object
        "... | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
The SNFuturesApi written above is a futures data source class:
A futures-type data source class must inherit from FuturesBaseMarket
__init__ specifies the parser class for the source
The data source class must mix in SupportMixin and implement the _support_market method to declare the supported markets; this example supports only the futures market
The data source class must implement the kline interface, fetching the daily data for the given symbol and handing it to the parser class
Usage example below. check_support tests whether US stocks are supported; the result is False: | SNFuturesApi(code_to_symbol('usSINA')).check_support(rs=False) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
check_support below tests whether futures are supported; the result is True: | SNFuturesApi(code_to_symbol('jd0')).check_support(rs=False) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Plug SNFuturesApi into the abupy system, and use the make_kl_df interface to fetch the egg futures continuous-contract data: | abupy.env.g_private_data_source = SNFuturesApi
ABuSymbolPd.make_kl_df('jd0').tail() | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Similar to the futures market is the US options market. abupy also supports backtesting and analysis for US options, but since no suitable public data source is available for now, there is no example here; users can plug their own US options data source into abupy.
7. Plugging in an external data source: Bitcoin and Litecoin
Below is an example of plugging in a coin-market data source. First implement a parser class for the data returned by the source, as follows: | @AbuDataParseWrap()
class HBTCParser(object):
    """Example coin-market data source parser class, decorated by the class decorator AbuDataParseWrap"""
    def __init__(self, symbol, json_dict):
        """
        :param symbol: the requested symbol, a str object
        :param json_dict: the json data returned by the request
        """
        data = json_dict
        # Prepare the attribute sequences required by AbuDataParseWrap
        if len(data)... | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
The HBTCParser written above is basically the same as the parser classes above: decorated by the class decorator AbuDataParseWrap, implementing the __init__ function
As in this example each json record has the following format, the values are parsed by position:
['20170809000000000', 22588.08, 23149.99, 22250.0, 22730.0, 7425.5134]
So ABuDateUtil.fmt_date is needed to convert the timestamp format. Below is the corresponding data source class, using the Huobi data source as the example: | class HBApi(TCBaseMarket, SupportMixin):
    """Huobi data source, supporting coins: Bitcoin and Litecoin"""
    K_NET_BASE = 'https://www.huobi.com/qt/staticmarket/%s_kline_100_json.js?length=%d'
    def __init__(self, symbol):
        """
        :param symbol: a Symbol object
        """
        super(HBApi, self).__init__(symbol)
        # Set the parser class for this data source
        ... | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
The HBApi written above is a data source class supporting Bitcoin and Litecoin:
A coin-type data source class must inherit from TCBaseMarket
__init__ specifies the parser class for the source
The data source class must mix in SupportMixin and implement the _support_market method to declare the supported markets; this example supports only the coin market
The data source class must implement the kline interface, fetching the daily data for the given symbol and handing it to the parser class
The data source class must implement the minute k-line interface, or may raise NotImplementedError
Usage example below. check_support tests whether US stocks are supported; the result is False: | HBApi(code_to_symbol('usSINA')).check_support(rs=False) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
check_support below tests whether Bitcoin is supported; the result is True: | HBApi(code_to_symbol('btc')).check_support(rs=False) | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
Plug HBApi into the abupy system, and use the make_kl_df interface to fetch Bitcoin data: | abupy.env.g_private_data_source = HBApi
ABuSymbolPd.make_kl_df('btc').tail() | abupy_lecture/19-数据源(ABU量化使用文档).ipynb | bbfamily/abu | gpl-3.0 |
1.2.2 Vowel harmony (10 points)
Also handle vowel harmony. Write a function that traverses the tree manually (similarly to exercise 2.4 in the lab) and returns True or False, depending on whether the tree conforms to vowel harmony rules. Use this function in parse_tree (and parse) to filter invalid trees. | # Tests
assert parser.parse('legfinomabbak') == 'leg[/Supl]finom[/Adj]abb[_Comp/Adj]ak[Pl]'
assert parser.parse('legfinomabbek') == None | homeworks/homework3/homework3.ipynb | bmeaut/python_nlp_2017_fall | mit |
Exercise 2: Syntax (55 points)
In this exercise, you will parse a treebank, and induce a PCFG grammar from it. You will then implement a probabilistic version of the CKY algorithm, and evaluate the grammar on the test split of the treebank.
2.1 Parse a treebank (10 points)
Parse the treebank file en_lines-ud-train.s in... | from nltk.tree import Tree
def parse_treebank(treebank_file):
pass
# Tests
assert sum(1 for _ in parse_treebank('en_lines-ud-train.s')) == 2613
assert isinstance(next(parse_treebank('en_lines-ud-train.s')), Tree) | homeworks/homework3/homework3.ipynb | bmeaut/python_nlp_2017_fall | mit |
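In the homework itself `nltk.tree.Tree` would do the bracket parsing; as a dependency-free illustration of the same idea, here is a minimal s-expression parser (the tree string below is a made-up example, not the treebank format exactly):

```python
def parse_sexpr(s):
    # Parse one bracketed tree like "(S (NP dog) (VP runs))" into (label, children)
    tokens = s.replace('(', ' ( ').replace(')', ' ) ').split()

    def read(pos):
        assert tokens[pos] == '('
        label = tokens[pos + 1]
        children, pos = [], pos + 2
        while tokens[pos] != ')':
            if tokens[pos] == '(':
                child, pos = read(pos)     # nested subtree
            else:
                child, pos = tokens[pos], pos + 1  # leaf token
            children.append(child)
        return (label, children), pos + 1

    tree, _ = read(0)
    return tree

tree = parse_sexpr('(S (NP dog) (VP runs))')
```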
2.2 Filter trees (5 points)
In order to avoid problems further down the line, we shall only handle a subset of the trees in the treebank. We call a tree valid, if
- its root is 'S'
- the root has at least two children.
Write a function that returns True for "valid" trees and False for invalid ones. Filter the your gene... | def is_tree_valid(tree):
pass
# Tests
assert sum(map(is_tree_valid, parse_treebank('en_lines-ud-train.s'))) == 2311 | homeworks/homework3/homework3.ipynb | bmeaut/python_nlp_2017_fall | mit |
2.3 Induce the PCFG grammar (10 points)
Now that you have the trees, it is time to induce (train) a PCFG grammar for it! Luckily, nltk has a functions for just that: nltk.grammar.induce_pcfg. Use it to acquire your PCFG grammar. You can find hints at how to use it in the grammar module.
Note: since we want to parse sen... | def train_grammar(trees):
pass
def is_grammar_cnf(grammar):
for prod in grammar.productions():
rhs = prod.rhs()
if len(rhs) > 2 or (len(rhs) == 1 and isinstance(rhs[0], nltk.Nonterminal)):
return False
return True
# Tests
grammar = train_grammar(filter(is_tree_valid, pa... | homeworks/homework3/homework3.ipynb | bmeaut/python_nlp_2017_fall | mit |
2.4 Implement PCKY (15 points)
Implement the PCKY algorithm. Encapsulate it in a class called PCKYParser. Extend your CKYParser solution from the lab so that it creates trees with probabilities (ProbabilisticTree). The parse() method should also accept a parameter n, and only return the most probable n trees (as a gene... | import os
class FomaRegex:
pass
# Tests
assert FomaRegex.convert('ab?c*d+') == '{a}{b}^<2{c}*{d}+'
assert FomaRegex.convert('a.b') == '{a}?{b}'
assert FomaRegex.convert('a+(bc|de).*') == '{a}+[{bc}|{de}]?*'
with FomaRegex('a.b') as fr:
assert fr.pattern == '{a}?{b}', 'Invalid pattern'
assert fr.fsa_file ... | homeworks/homework3/homework3.ipynb | bmeaut/python_nlp_2017_fall | mit |
3.2 Application* (5 points)
Add a match method to the class that runs the regex against the specified string. It should return True or False depending on whether the regex matched the string.
Note: obviously you should use your FSA file and foma, not the re module. :) | # Tests
with FomaRegex('a*(bc|de).+') as fr:
assert fr.match('aabcd') is True
assert fr.match('ade') is False | homeworks/homework3/homework3.ipynb | bmeaut/python_nlp_2017_fall | mit |
3.3 Multiple regexes (5 points)
Make sure not all FomaRegex objects use the same FSA file. | # Tests
with FomaRegex('a') as a, FomaRegex('b') as b:
assert a.fsa_file != b.fsa_file | homeworks/homework3/homework3.ipynb | bmeaut/python_nlp_2017_fall | mit |
Example of Cell Magic Using '%%' | %%timeit
walker = RandomWalker()
walk = [position for position in walker.walk(10000)] | Numpy Tutorial.ipynb | 211217613/python_meetup | unlicense |
Line magic uses 1 '%'
Example Using Functional Programming
Remove class definition | def random_walk_f(n):
position = 0
walk = [position]
for i in range(n):
position = 2 * random.randint(0,1) - 1
walk.append(position)
return walk
%%timeit
walk = random_walk_f(10000) | Numpy Tutorial.ipynb | 211217613/python_meetup | unlicense |
small improvement in time
Vectorized Approach Like When You Did Things in MATLAB :(
Get rid of the loop | from itertools import accumulate
def random_walker_v(n):
steps = random.sample([1, -1] * n, n)
return list(accumulate(steps))
%%timeit
walk = random_walker_v(10000) | Numpy Tutorial.ipynb | 211217613/python_meetup | unlicense |
WOW 2x as fast
NumPy-ifying
def random_walker_np(n):
steps = 2 * np.random.randint(0, 2, size=n) - 1
return np.cumsum(steps)
%%timeit
walk = random_walker_np(10000) | Numpy Tutorial.ipynb | 211217613/python_meetup | unlicense |
Getting Started with Basic Numpy Array
Create an array
Clobber the namespace so we don't have to type np.<name> | import numpy as np | Numpy Tutorial.ipynb | 211217613/python_meetup | unlicense |
Create an np array. You can pass any type of python seq: list, tuples, etc | a = np.array([0,1,2,3,4,5])
a | Numpy Tutorial.ipynb | 211217613/python_meetup | unlicense |
Multidimensional array using list of lists | m = np.array([[1,2,3], [4,5,6]])
m.shape
ad = a.data
list(ad)
# what type is a
type(a)
# what is the numerica type of the elements in the array
a.dtype
# What shape (dimensions) is the array
a.shape
# Bytes per element. 32bit integers should be 4 bytes
a.itemsize
# Total size in bytes of the array
a.nbytes
# Be... | Numpy Tutorial.ipynb | 211217613/python_meetup | unlicense |
Reshape and Resize
Operations | # Element wise addition
# Element wise subtraction | Numpy Tutorial.ipynb | 211217613/python_meetup | unlicense |
Do Some Vector Math Not for Loop Math | %%%timeit
dy = y[1:] - y[:-1] | Numpy Tutorial.ipynb | 211217613/python_meetup | unlicense |
%%capture <varname> captures the result of the operation into a var | %%capture timeit_result
%timeit python_list1 = range(1,1000)
%timeit python_list2 = np.arange(1,1000)
print(timeit_result) | Numpy Tutorial.ipynb | 211217613/python_meetup | unlicense |
Statistical Analysis | data_set = random.random((2,3))
print(data_set)
# example of namespace....cant access np.max and builtin max is being used
max(data_set[0])
| Numpy Tutorial.ipynb | 211217613/python_meetup | unlicense |
Then, we need to go through all the rows in the file, and for each add the RecombinantFraction to the right Line and InfectionStatus. To do so, we need to choose a data structure. Here we use a dictionary, where the keys are given by Line, and each value of the dictionary is another dictionary where the keys W and I in... | my_data = {}
with open('../data/Singh2015_data.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
my_line = row['Line']
my_status = row['InfectionStatus']
my_recomb = float(row['RecombinantFraction'])
# Test by printing the values
print(my_line, my_sta... | python/solutions/Singh2015_solution.ipynb | StefanoAllesina/ISC | gpl-2.0 |
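The nested-dictionary structure described above can be sketched on a few made-up rows (the values below are illustrative, not the Singh2015 data):

```python
import csv
import io

csv_text = """Line,InfectionStatus,RecombinantFraction
L1,W,0.12
L1,I,0.18
L2,W,0.10
"""

my_data = {}
for row in csv.DictReader(io.StringIO(csv_text)):
    line, status = row['Line'], row['InfectionStatus']
    # One dictionary per Line, holding a list of values per infection status
    my_data.setdefault(line, {}).setdefault(status, []).append(
        float(row['RecombinantFraction']))
```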
Train
I found some help with parameters here:
* https://github.com/JohnLangford/vowpal_wabbit/wiki/Tutorial
* https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments
--cache_file train.cache
converts train_ALL.vw to a binary file for future faster processing.
Next time we go through the model bu... | !rm train_logmulti.vw.cache
!rm mnist_train_logmulti.model
!vw -d data/mnist_train.vw -b 19 --ect 10 -f mnist_train_logmulti.model -q ii --passes 100 -l 0.4 --early_terminate 3 --cache_file train_logmulti.vw.cache --power_t 0.6 | vw/VW_benchmark_log_multi.ipynb | grfiv/MNIST | mit |
Predict
-t
is for test file
-i
specifies the model file created earlier
-p
where to store the class predictions [1,10] | !rm predict_logmulti.txt
!vw -t data/mnist_test.vw -i mnist_train_logmulti.model -p predict_logmulti.txt | vw/VW_benchmark_log_multi.ipynb | grfiv/MNIST | mit |
Analyze | y_true=[]
with open("data/mnist_test.vw") as f:  # text mode, so the str regex pattern matches
    for line in f:
        m = re.search(r'^\d+', line)
        if m:
            found = m.group()
            y_true.append(int(found))

y_pred = []
with open("predict_logmulti.txt") as f:
    for line in f:
        m = re.search(r'^\d+', line)
        if m:
... | vw/VW_benchmark_log_multi.ipynb | grfiv/MNIST | mit |
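Once `y_true` and `y_pred` have been parsed from the two files, accuracy and a confusion matrix can be computed without extra libraries; the short label vectors below are made-up stand-ins for the real MNIST predictions:

```python
# Made-up stand-ins for labels parsed from the files above.
y_true = [1, 2, 3, 3, 5, 1]
y_pred = [1, 2, 3, 4, 5, 1]

n_classes = 10  # VW's --ect 10 uses labels 1..10
confusion = [[0] * n_classes for _ in range(n_classes)]
for t, p in zip(y_true, y_pred):
    confusion[t - 1][p - 1] += 1  # rows: true label, columns: prediction

# Accuracy is the fraction of mass on the diagonal.
accuracy = sum(confusion[i][i] for i in range(n_classes)) / len(y_true)
print(f"accuracy: {accuracy:.3f}")
```

The off-diagonal cells show which digits get confused with which, which is usually more informative than the single accuracy number.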
Of course, I'm going to use the COVID-19 case data from Uruguay. I don't have the fully complete data, but thanks to the people at GUIAD-Covid-19 I can get close. From that data, for extra magic, I'll use only two attributes: the day since we started measuring, and the number of cases. | covid=pd.read_csv('https://raw.githubusercontent.com/natydasilva/COVID19-UDELAR/master/Datos/Datos_Nacionales/estadisticasUY.csv?token=ABCA7RFDBSMT4PMGJMNCXQS6RUVRA')
covid
data=covid.loc[3:,['dia', 'acumTestPositivos']]
data | src/Sobreajustando.ipynb | gmonce/datascience | gpl-3.0 |
And I'm going to use a degree-5 polynomial function, to see if we can fit the data and find a pattern. This procedure is called regression, and it is one of the tools of Artificial Intelligence. | degree=5
x=data['dia']
x_plot=np.linspace(4,19,20)
y=data['acumTestPositivos']
X=x[:,np.newaxis]
X_plot=x_plot[:,np.newaxis]
plt.scatter(x,y, color='cornflowerblue', linewidth=2,
label="ground truth")
model = make_pipeline(PolynomialFeatures(degree), Ridge())
model.fit(X, y)
y_plot = model.predict(X_plot)
pl... | src/Sobreajustando.ipynb | gmonce/datascience | gpl-3.0 |
Incredible as it may seem, we have found a function that almost perfectly fits the confirmed positive cases in Uruguay. And now, the finishing touch: let's use this function to predict how many cases there will be in 10 days, if this pace continues. | x_plot=np.linspace(0,30,30)
plt.scatter(x,y, color='cornflowerblue', linewidth=2,
label="ground truth")
X_plot=x_plot[:,np.newaxis]
y_plot = model.predict(X_plot)
plt.plot(x_plot, y_plot, color='teal', linewidth=2,
label="degree %d" % degree)
plt.title("Predicción:Casos positivos de COVI... | src/Sobreajustando.ipynb | gmonce/datascience | gpl-3.0 |
Import Python packages
Execute the command below (Shift + Enter) to load all the python libraries we'll need for the lab. | import datetime
import pickle
import os
import pandas as pd
import xgboost as xgb
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.utils import shuffle
from sklearn.base import clone
from sklearn.model_selection import train_test... | quests/dei/census/income_xgboost.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Download and process data
The models you'll build will predict the income level, whether it's less than or equal to $50,000 per year, of individuals given 14 data points about each individual. You'll train your models on this UCI Census Income Dataset.
We'll read the data into a Pandas DataFrame to see what we'll be wo... | train_csv_path = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hour... | quests/dei/census/income_xgboost.ipynb | turbomanage/training-data-analyst | apache-2.0 |
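With the column names defined, the read step can be sketched on a tiny inline sample rather than downloading the full UCI file (the sample rows below are made up, in the same comma-plus-space format as the census data):

```python
import io
import pandas as pd

# Two made-up rows in the UCI census format (note the space after each comma).
sample = ("39, State-gov, 77516, Bachelors, 13\n"
          "50, Self-emp-not-inc, 83311, Bachelors, 13\n")
cols = ["age", "workclass", "fnlwgt", "education", "education-num"]

# skipinitialspace=True strips the leading space from each field.
df = pd.read_csv(io.StringIO(sample), names=cols, skipinitialspace=True)
print(df.shape)
```

Without `skipinitialspace=True` (or a later strip step), every string value carries a leading space, which silently breaks equality comparisons downstream.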
Now you're ready to build and train your first model!
Build a First Model
The model we build closely follows a template for the census dataset found on AI Hub. For our model we use an XGBoost classifier. However, before we train our model we have to pre-process the data a little bit. We build a processing pipeline usin... | numerical_indices = [0, 12]
categorical_indices = [1, 3, 5, 7]
p1 = make_pipeline(
custom_transforms.PositionalSelector(categorical_indices),
custom_transforms.StripString(),
custom_transforms.SimpleOneHotEncoder()
)
p2 = make_pipeline(
custom_transforms.PositionalSelector(numerical_indices),
S... | quests/dei/census/income_xgboost.ipynb | turbomanage/training-data-analyst | apache-2.0 |
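The strip-then-one-hot steps performed by the categorical branch above can be sketched in plain Python (the lab's `custom_transforms` module itself is not shown in this excerpt, so this is a guess at its behavior):

```python
# Made-up categorical values with the census data's leading spaces.
workclass = [" Private", "State-gov", " Private", "Self-emp"]

stripped = [v.strip() for v in workclass]  # like a StripString step
categories = sorted(set(stripped))          # fixed column order per category
one_hot = [[1 if v == c else 0 for c in categories] for v in stripped]

print(categories)
print(one_hot)
```

Each row ends up with exactly one 1, in the column of its category; equal inputs map to equal rows only after the strip step, which is why stripping comes first in the pipeline.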
Now it's time to deploy the model. We can do that with this gcloud command: | %%bash
MODEL_NAME="census_income_classifier"
VERSION_NAME="original"
MODEL_DIR="gs://$QWIKLABS_PROJECT_ID/original/"
CUSTOM_CODE_PATH="gs://$QWIKLABS_PROJECT_ID/custom_transforms-0.1.tar.gz"
gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--runtime-version 1.15 \
--python-version 3... | quests/dei/census/income_xgboost.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Test your model by running this code: | !gcloud ai-platform predict --model=census_income_classifier --json-instances=predictions.json --version=original | quests/dei/census/income_xgboost.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Deploy the model to AI Platform using the following bash script: | %%bash
gsutil cp model.pkl gs://$QWIKLABS_PROJECT_ID/balanced/
MODEL_NAME="census_income_classifier"
VERSION_NAME="balanced"
MODEL_DIR="gs://$QWIKLABS_PROJECT_ID/balanced/"
CUSTOM_CODE_PATH="gs://$QWIKLABS_PROJECT_ID/custom_transforms-0.1.tar.gz"
gcloud beta ai-platform versions create $VERSION_NAME \
--model ... | quests/dei/census/income_xgboost.ipynb | turbomanage/training-data-analyst | apache-2.0 |
We can make pretty graphs. | import matplotlib.pyplot as plt
import math
import numpy as np
%matplotlib inline
t = np.arange(0., 5., 0.2)
plt.plot(t, t, 'r--', t, t**2, 'bs') | Intro to Jupyter Notebooks.ipynb | 211217613/python_meetup | unlicense |
Now we make a relatively complex matplotlib figure using subplot functions. | fig = plt.figure(figsize=(4,4))
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1, 2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2, 0))
ax5 = plt.subplot2grid((3,3), (2, 1)) | examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb | FlyRanch/figurefirst | mit |
Now import figurefirst, and with a single function call we generate an svg file ('test_mpl_conversion.svg'), which is saved to disk, and automatically loaded as a FigureFirst layout object, ready to go. | import figurefirst as fifi
from importlib import reload  # reload is not a builtin in Python 3
reload(fifi.mpl_fig_to_figurefirst_svg)
layout = fifi.mpl_fig_to_figurefirst_svg.mpl_fig_to_figurefirst_svg(fig, 'test_mpl_conversion.svg')
# Now let's look at the SVG file (and close the automatically displayed matplotlib figure)
plt.close()
from IPython.display import display,SVG
display(SV... | examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb | FlyRanch/figurefirst | mit |
You can now open the SVG file in an SVG editor, like Inkscape, make adjustments to the rectangles, and reload the file using the standard FigureFirst API.
Try it, and see the resulting changes below. | layout = fifi.svg_to_axes.FigureLayout('test_mpl_conversion.svg', make_mplfigures=True) | examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb | FlyRanch/figurefirst | mit |
Using Matplotlib and FigureFirst layouts together
Suppose you would like to make a figure with 3 panels, where each panel contains a grid of axes that are easier to generate using Matplotlib functions than they are to draw. Or maybe you want the layout of these axes to be controlled based on the data. This can be accom... | fig = plt.figure(figsize=(4,4))
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1, 2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2, 0))
ax5 = plt.subplot2grid((3,3), (2, 1)) | examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb | FlyRanch/figurefirst | mit |