Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language Models
https://arxiv.org/pdf/2403.03432
We evenly sample about 10k training examples and 2k validation examples from each dataset.
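As a rough illustration of that even per-dataset split (the dataset names below are placeholders, not the exact task mixture used for this adapter), the sampling could look like this:

```python
from datasets import load_dataset

TRAIN_PER_DATASET = 10_000
VAL_PER_DATASET = 2_000

# Hypothetical source datasets; the actual mixture may differ.
dataset_names = ["ag_news", "imdb"]

train_splits, val_splits = [], []
for name in dataset_names:
    # Shuffle once, then take disjoint train/validation slices.
    ds = load_dataset(name, split="train").shuffle(seed=42)
    train_splits.append(ds.select(range(TRAIN_PER_DATASET)))
    val_splits.append(
        ds.select(range(TRAIN_PER_DATASET, TRAIN_PER_DATASET + VAL_PER_DATASET))
    )
```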
From laion/OIG, only the following was taken:
```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")
# Load the MoA classification adapter and activate it for inference.
model.load_adapter("evilfreelancer/moa-classification", set_active=True)
```
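A minimal inference sketch, assuming the adapter ships with its classification head (the input sentence and the interpretation of the predicted index are placeholders):

```python
import torch
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoAdapterModel.from_pretrained("roberta-base")
model.load_adapter("evilfreelancer/moa-classification", set_active=True)
model.eval()

inputs = tokenizer("How do I sort a list in Python?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Index of the predicted class; the label mapping depends on the adapter's head config.
predicted_id = int(logits.argmax(dim=-1))
print(predicted_id)
```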