Introduction
Transfer learning is an area of research in which knowledge from pretrained models is reused to improve performance on new tasks. In recent years, Transformer-based models like BERT [@devlin-etal-2019-bert] and T5 [@2019t5] have yielded state-of-the-art results on the lion's share of benchmark tasks for language understanding through pretraining and transfer, often paired with some form of multitask learning.
jiant enables a variety of complex training pipelines through simple configuration changes, including multi-task training [@Caruana1993MultitaskLA; @liu-etal-2019-multi] and pretraining, as well as the sequential fine-tuning approach from STILTs [@Phang2018SentenceEO]. In STILTs, intermediate task training takes a pretrained model like ELMo or BERT, and applies supplementary training on a set of intermediate tasks, before finally performing single-task training on additional downstream tasks.
jiant can be cloned and installed from GitHub: https://github.com/nyu-mll/jiant. jiant v1.3.0 requires Python 3.5 or later, and jiant's core dependencies are PyTorch [@NEURIPS2019_9015], AllenNLP [@Gardner2017AllenNLP], and HuggingFace's Transformers [@Wolf2019HuggingFacesTS]. jiant is released under the MIT License [@osi2020]. jiant runs on consumer-grade hardware or in cluster environments with or without CUDA GPUs. The jiant repository also contains documentation and configuration files demonstrating how to deploy jiant in Kubernetes clusters on Google Kubernetes Engine.
Tasks: Tasks have references to task data, methods for processing data, references to classifier heads, and methods for calculating performance metrics and making predictions.
Sentence Encoder: Sentence encoders map from the indexed examples to a sentence-level representation. Sentence encoders can include an input module (e.g., Transformer models, ELMo, or word embeddings), followed by an optional second layer of encoding (usually a BiLSTM). Examples of possible sentence encoder configurations include BERT, ELMo followed by a BiLSTM, BERT with a variety of pooling and aggregation methods, or a bag of words model.
Task-Specific Output Heads: Task-specific output modules map representations from sentence encoders to outputs specific to a task, e.g. entailment/neutral/contradiction for NLI tasks, or tags for part-of-speech tagging. They also include logic for computing the corresponding loss for training (e.g. cross-entropy).
Trainer: Trainers manage the control flow for the training and validation loop for experiments. They sample batches from one or more tasks, perform forward and backward passes, calculate training metrics, evaluate on a validation set, and save checkpoints. Users can specify experiment-specific parameters such as learning rate, batch size, and more.
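The validation-driven control flow above relies on early stopping governed by a patience counter (cf. the `patience` option in the example config later in this paper). The sketch below is a generic illustration of that logic, not jiant's actual trainer code:

```python
class EarlyStopping:
    """Stop training when the validation metric fails to improve
    for more than `patience` consecutive validation checks."""

    def __init__(self, patience):
        self.patience = patience
        self.best = float("-inf")
        self.bad_checks = 0

    def should_stop(self, val_metric):
        if val_metric > self.best:
            self.best = val_metric    # new best: a checkpoint would be saved here
            self.bad_checks = 0
        else:
            self.bad_checks += 1      # no improvement this check
        return self.bad_checks > self.patience


stopper = EarlyStopping(patience=2)
history = [0.70, 0.75, 0.74, 0.76, 0.73, 0.72, 0.71]
stopped_at = next(i for i, m in enumerate(history) if stopper.should_stop(m))
# Training halts at check 6, after three checks without improving on 0.76.
```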
Config: Config files or flags are defined in HOCON[^3] format. Configs specify parameters for jiant experiments, including choices of tasks, sentence encoder, and training routine.[^4]
Configs are jiant's primary user interface. Tasks and modeling components are designed to be modular, while jiant's pipeline is a monolithic, configuration-driven design intended to facilitate a number of common workflows outlined in Section 3.3.
[Figure 2: jiant's core pipeline.]
jiant's core pipeline consists of the five stages described below and illustrated in Figure 2:
One or more configs defining an experiment are interpreted. Users can choose and configure models, tasks, and stages of training and evaluation.
The tasks and sentence encoder are prepared:
The task data is loaded, tokenized, and indexed, and the preprocessed task objects are serialized and cached. In this process, AllenNLP is used to create the vocabulary and index the tokenized data.
The sentence encoder is constructed and (optionally) pretrained weights are loaded.[^5]
The task-specific output heads are created for each task, and task heads are attached to a common sentence encoder. Optionally, different tasks can share the same output head, as in @liu-etal-2019-multi.
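In miniature, the tokenize-and-index step looks like the following toy sketch. jiant delegates the real work to AllenNLP's vocabulary and indexing utilities; the function names here are invented for illustration:

```python
def build_vocab(tokenized_examples, specials=("<pad>", "<unk>")):
    """Assign each token type a stable integer id, reserving ids for specials."""
    vocab = {tok: i for i, tok in enumerate(specials)}
    for tokens in tokenized_examples:
        for tok in tokens:
            vocab.setdefault(tok, len(vocab))
    return vocab

def index(tokens, vocab):
    """Map tokens to ids, falling back to <unk> for out-of-vocabulary tokens."""
    return [vocab.get(tok, vocab["<unk>"]) for tok in tokens]

examples = [["the", "cat", "sat"], ["the", "dog", "sat"]]
vocab = build_vocab(examples)                      # {"<pad>": 0, "<unk>": 1, "the": 2, ...}
indexed = [index(toks, vocab) for toks in examples]  # [[2, 3, 4], [2, 5, 4]]
```

The indexed examples are what gets serialized and cached for reuse across experiments.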
Optionally, in the intermediate phase the trainer samples batches randomly from one or more tasks,[^6] and trains the shared model.
Optionally, in the target training phase, a copy of the model is configured and trained or fine-tuned for each target task separately.
Optionally, the model is evaluated on the validation and/or test sets of the target tasks.
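One common scheme for the intermediate phase draws each batch's task with probability proportional to task size, so larger tasks are visited more often. The following is a simplified sketch of that idea, not jiant's implementation:

```python
import random

def make_task_sampler(task_sizes, seed=0):
    """Return a function that picks a task name with probability
    proportional to the number of training examples in each task."""
    rng = random.Random(seed)
    names = list(task_sizes)
    weights = [task_sizes[n] for n in names]
    return lambda: rng.choices(names, weights=weights, k=1)[0]

# MNLI's training set (~393k examples) dwarfs RTE's (~2.5k),
# so almost every sampled batch comes from MNLI.
sample_task = make_task_sampler({"mnli": 392_702, "rte": 2_490})
draws = [sample_task() for _ in range(1000)]
```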
jiant supports over 50 tasks. Task types include classification, regression, sequence generation, tagging, masked language modeling, and span prediction. jiant focuses on NLU tasks like MNLI [@N18-1101], CommonsenseQA [@talmor2018commonsenseqa], the Winograd Schema Challenge [@wsc], and SQuAD [@squad]. A full inventory of tasks and task variants is available in the jiant/tasks module.
jiant provides support for cutting-edge sentence encoder models, including support for HuggingFace's Transformers. Supported models include: ELMo [@peters-etal-2018-deep], GPT [@radford2018improving], BERT [@devlin-etal-2019-bert], XLM [@NIPS20198928], GPT-2 [@radford2019language], XLNet [@yang2019xlnet], RoBERTa [@liu2019roberta], and ALBERT [@lan2019albert]. jiant also supports the from-scratch training of (bidirectional) LSTMs [@hochreiter1997long] and deep bag of words models [@iyyer-etal-2015-deep], as well as syntax-aware models such as PRPN [@DBLP:conf/iclr/ShenLHC18] and ON-LSTM [@shen2018ordered]. jiant also supports word embeddings such as GloVe [@pennington-etal-2014-GloVe].
// Config for BERT experiments.
// Get default configs from a file:
include "defaults.conf"
exp_name = "bert-large-cased"
// Data and preprocessing settings
max_seq_len = 256
// Model settings
input_module = "bert-large-cased"
transformers_output_mode = "top"
s2s = {
attention = none
}
sent_enc = "none"
sep_embs_for_skip = 1
classifier = log_reg
// fine-tune entire BERT model
transfer_paradigm = finetune
// Training settings
dropout = 0.1
optimizer = bert_adam
batch_size = 4
max_epochs = 10
lr = .00001
min_lr = .0000001
lr_patience = 4
patience = 20
max_vals = 10000
// Phase configuration
do_pretrain = 1
do_target_task_training = 1
do_full_eval = 1
write_preds = "val,test"
write_strict_glue_format = 1
// Task specific configuration
commitbank = {
val_interval = 60
max_epochs = 40
}

Figure 3: jiant experiment config file.

jiant experiments can be run with a simple CLI:
python -m jiant \
--config_file roberta_with_mnli.conf \
--overrides "target_tasks = swag, \
run_name = swag_01"
jiant provides default config files that allow running many experiments without modifying source code.
jiant also provides baseline config files that can serve as a starting point for model development and evaluation against GLUE [@wang2018glue] and SuperGLUE [@wang2019superglue] benchmarks.
More advanced configurations can be developed by composing multiple configuration files and overrides. Figure 3 shows a config file that overrides a default config, defining an experiment that uses BERT as the sentence encoder. This config includes an example of a task-specific configuration, which can be overridden in another config file or via a command-line override.
Because jiant supports command-line overrides with a flag, it is easy to write scripts that launch jiant experiments over a range of parameters, for example when performing grid search over hyperparameters. jiant users have successfully run large-scale experiments launching hundreds of runs on both Kubernetes and Slurm.
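A grid-search launcher built on the overrides flag might look like the script below. It is a dry run that prints each command rather than executing it; the learning-rate values and run names are illustrative:

```shell
#!/bin/sh
# Print one jiant invocation per learning rate in the grid.
# Drop the `echo` to actually launch the runs.
for lr in 1e-5 2e-5 3e-5; do
  echo python -m jiant \
    --config_file roberta_with_mnli.conf \
    --overrides "lr = ${lr}, run_name = mnli_lr_${lr}"
done
```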
Here we highlight some example use cases and key corresponding jiant config options required in these experiments:
Fine-tune BERT on SWAG [@zellers-etal-2018-swag] and SQuAD [@squad], then fine-tune on HellaSwag [@zellers-etal-2019-hellaswag]:
input_module = bert-base-cased
pretrain_tasks = "swag,squad"
target_tasks = hellaswag

Train a probing classifier over a frozen BERT model, as in @tenney2019bert:

input_module = bert-base-cased
target_tasks = edges-dpr
transfer_paradigm = frozen

Compare performance of GloVe [@pennington-etal-2014-GloVe] embeddings using a BiLSTM:

input_module = glove
sent_enc = rnn

Evaluate ALBERT [@lan2019albert] on the MNLI [@N18-1101] task:

input_module = albert-large-v2
target_tasks = mnli
jiant implements features that improve run stability and efficiency:
- jiant implements checkpointing options designed to offer efficient early stopping and to show consistent behavior when restarting after an interruption.
- jiant caches preprocessed task data to speed up reuse across experiments which share common data resources and artifacts.
- jiant implements gradient accumulation and multi-GPU training, which enables training on larger batches than can fit in memory on a single GPU.
- jiant supports outputting predictions in a format ready for GLUE and SuperGLUE benchmark submission.
- jiant generates custom log files that capture experimental configurations, training and evaluation metrics, and relevant run-time information.
- jiant generates TensorBoard event files [@tensorflow2015-whitepaper] for training and evaluation metric tracking. TensorBoard event files can be visualized using the TensorBoard Scalars Dashboard.
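The idea behind gradient accumulation is that gradients from several micro-batches are summed before a single parameter update, so the effective batch size is the number of accumulation steps times the micro-batch size. A generic illustration using a scalar least-squares model (not jiant's implementation):

```python
def grad(w, batch):
    """Mean gradient of the squared error 0.5 * (w*x - y)^2 over a batch."""
    return sum((w * x - y) * x for x, y in batch) / len(batch)

def accumulate_step(w, micro_batches, lr=0.1):
    """Sum micro-batch gradients, then apply one update -- equivalent to
    a single step on the concatenated large batch."""
    total = sum(grad(w, mb) * len(mb) for mb in micro_batches)
    n = sum(len(mb) for mb in micro_batches)
    return w - lr * total / n

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w_accum = accumulate_step(1.0, [data[:2], data[2:]])  # two micro-batches of 2
w_full = 1.0 - 0.1 * grad(1.0, data)                  # one full batch of 4
# w_accum == w_full: accumulation reproduces the full-batch update.
```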
jiant's design offers conveniences that reduce the need to modify code when making changes:
- jiant's task registry makes it easy to define a new version of an existing task using different data. Once the new task is defined in the task registry, the task is available as an option in jiant's config.
- jiant's sentence encoder and task output head abstractions allow for easy support of new sentence encoders.
In use cases requiring the introduction of a new task, users can use class inheritance to build on a number of available parent task types including classification, tagging, span prediction, span classification, sequence generation, regression, ranking, and multiple choice task classes. For these task types, corresponding task-specific output heads are already implemented.
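A new task could be added along these lines. The names are hypothetical throughout: `ClassificationTask` is a minimal stand-in for the relevant jiant parent task type, and the data format is invented for illustration:

```python
# Hypothetical sketch of defining a new task via class inheritance.
class ClassificationTask:
    """Stand-in for a classification parent task type: the output head
    and metrics would be provided for all subclasses."""

    def __init__(self, name, labels):
        self.name = name
        self.labels = labels
        self.n_classes = len(labels)  # sizes the task-specific output head

    def load_data(self, path):
        raise NotImplementedError


class MyPolarityTask(ClassificationTask):
    """A new binary sentiment task: only data loading is task-specific."""

    def __init__(self):
        super().__init__(name="my_polarity", labels=["negative", "positive"])

    def load_data(self, path):
        # Invented TSV format: "<label>\t<sentence>" per line.
        with open(path) as f:
            return [line.rstrip("\n").split("\t", 1) for line in f]


task = MyPolarityTask()
```

Because the parent type already carries an output head and metrics, the subclass only needs to describe its data.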
More than 30 researchers and developers from more than 5 institutions have contributed code to the jiant project.[^7] jiant's maintainers welcome pull requests that introduce new tasks or sentence encoder components, and pull requests are actively reviewed. The jiant repository's continuous integration system requires that all pull requests pass unit and integration tests and meet Black[^8] code formatting requirements.
While jiant is quite flexible in the pipelines that can be specified through configs, and some components are highly modular (e.g., tasks, sentence encoders, and output heads), modification of the pipeline code can be difficult. For example, training in more than two phases would require modifying the trainer code.[^9] Making multi-stage training configurations more flexible is on jiant's development roadmap.
jiant's development roadmap prioritizes adding support for new Transformer models and adding tasks that are commonly used for pretraining and evaluation in NLU. Additionally, there are plans to make jiant's training phase configuration options more flexible to allow training in more than two phases, and to continue refactoring jiant's code to keep it flexible enough to track developments in NLU research.