The path of the configuration file is as follows:
config/*
LibContinual Configuration File Composition
LibContinual configuration files use the YAML format. The predefined default configuration is located at core/config/default.yaml, and users can place custom configuration files in the config/ directory, saved in .yaml format.
Although most options are pre-set in default.yaml, you cannot run the framework with default.yaml alone. You need to write a configuration file for the method you want to run in advance; the parameter descriptions below can serve as a reference when writing your own configuration file.
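Conceptually, the values in a user config file overlay the defaults from default.yaml. The sketch below is illustrative only (it is not LibContinual's actual merge code, and the `deep_update` helper is a hypothetical name); it shows how a user config dictionary might override defaults after both files are parsed, e.g. with `yaml.safe_load`:

```python
# Illustrative sketch: recursively overlay a user config onto defaults.
# Not LibContinual's actual implementation.

def deep_update(default: dict, override: dict) -> dict:
    """Return a copy of `default` with values from `override` overlaid."""
    merged = dict(default)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Recurse into nested sections (e.g. optimizer.kwargs).
            merged[key] = deep_update(merged[key], value)
        else:
            merged[key] = value
    return merged

default_cfg = {"batch_size": 64,
               "optimizer": {"name": "SGD", "kwargs": {"lr": 0.1}}}
user_cfg = {"optimizer": {"kwargs": {"lr": 0.001}}}

cfg = deep_update(default_cfg, user_cfg)
print(cfg["optimizer"]["name"])          # SGD (kept from default)
print(cfg["optimizer"]["kwargs"]["lr"])  # 0.001 (overridden by user)
```

Only the keys you set in your own file need to be written; everything else falls back to the defaults.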
The config/headers folder contains the following files:
- data.yaml: Definitions related to data configuration are in this file
- device.yaml: Definitions related to GPU configuration items are in this file
- model.yaml: Definitions related to model configuration are in this file
- optimizer.yaml: Definitions related to optimizer configuration are in this file
LibContinual Configuration Settings
Data Settings
- data_root: The storage path of the dataset
- image_size: The size of the input image
- pin_memory: Whether to use pinned memory to speed up data loading
- workers: The number of processes for parallel data loading
data_root: /data/cifar10/
image_size: 32
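A data section that also sets the loading-related options (values here are illustrative):

```yaml
data_root: /data/cifar10/
image_size: 32
pin_memory: True
workers: 8
```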
Model Settings
backbone: Backbone network information used in this method
- name: The name of the backbone network, which needs to correspond to the implementation in the LibContinual framework
- kwargs: Parameters required by the backbone network, which need to be consistent with the naming in the code
  - num_classes: The total number of classes the model needs to classify
  - args: Other required parameters
    - dataset: The dataset being used, as different datasets have different backbone network implementation details
backbone:
  name: resnet18
  kwargs:
    num_classes: 10
    args:
      dataset: cifar10
classifier: Classifier information used in the method
- name: The name of the classifier, which needs to be consistent with the method implementation in LibContinual
- kwargs: Initialization parameters of the classifier, which need to be consistent with the names in the code implementation

classifier:
  name: PASS
  kwargs:
    num_class: 100
    feat_dim: 512
    # The following are method-related hyperparameters
    feat_KD: 10.0
    proto_aug: 10.0
    temp: 0.1
Training Settings
- init_cls_num: The number of training classes in the first task
- inc_cls_num: The number of training classes in each subsequent incremental task
- task_num: The total number of tasks
- init_epoch: The number of training epochs for the first task
- epoch: The number of training epochs for incremental tasks
- val_per_epoch: How often (in epochs) to evaluate performance on the test set
- batch_size: Batch size during training
- warmup: The number of warm-up epochs before training
warmup: 0
init_cls_num: 50
inc_cls_num: 10
task_num: 6
batch_size: 64
init_epoch: 100
epoch: 100
val_per_epoch: 10
Optimizer Settings
optimizer: Information about the optimizer used in training
- name: The name of the optimizer; only optimizers built into PyTorch are supported
- kwargs: Parameters of the optimizer, whose names need to match the parameter names in PyTorch optimizers, for example:
  - lr: Learning rate of the optimizer
  - weight_decay: Weight decay
optimizer:
name: Adam
kwargs:
lr: 0.001
weight_decay: 0.0002
lr_scheduler: The learning rate adjustment strategy used in training; only strategies built into PyTorch are supported
- name: The name of the learning rate adjustment strategy
- kwargs: Parameters of the learning rate adjustment strategy; note that different strategies take different parameters
lr_scheduler:
name: StepLR
kwargs:
step_size: 45
gamma: 0.1
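Since any scheduler built into PyTorch can be named here, other strategies plug in the same way; for example, cosine annealing (values illustrative):

```yaml
lr_scheduler:
  name: CosineAnnealingLR
  kwargs:
    T_max: 100
```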
Hardware Settings
- device_ids: The IDs of the GPUs to use
- n_gpu: The number of GPUs used in parallel during training; if it is 1, parallel training is not used
- seed: The random seed
- deterministic: Whether to enable torch.backends.cudnn.benchmark
device_ids: 3
n_gpu: 1
seed: 0
deterministic: False
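Putting the sections above together, a complete configuration file might look like the following. The values are illustrative only (the method-specific classifier section is omitted here; consult the method you are running for its required hyperparameters):

```yaml
data_root: /data/cifar10/
image_size: 32

warmup: 0
init_cls_num: 5
inc_cls_num: 1
task_num: 6
batch_size: 64
init_epoch: 100
epoch: 100
val_per_epoch: 10

backbone:
  name: resnet18
  kwargs:
    num_classes: 10
    args:
      dataset: cifar10

optimizer:
  name: Adam
  kwargs:
    lr: 0.001
    weight_decay: 0.0002

lr_scheduler:
  name: StepLR
  kwargs:
    step_size: 45
    gamma: 0.1

device_ids: 0
n_gpu: 1
seed: 0
deterministic: False
```

Save a file like this under config/ and point the framework at it to run.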