This is a LoRA finetuning of Bloom-7b1 using the Alpaca instruction dataset.
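
A minimal sketch of how an adapter like this one might be loaded for inference with the `peft` library. The adapter repo id below is a placeholder (substitute this repository's id), and the prompt follows the standard Alpaca instruction template; this is an illustration, not the exact training or inference script.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "bigscience/bloom-7b1"
adapter_id = "your-username/bloom-7b1-lora-alpaca"  # placeholder repo id

# Load the frozen base model, then attach the LoRA adapter weights on top.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Alpaca-style instruction prompt, matching the finetuning data format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what LoRA is in one sentence.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```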

It really highlights how undertrained the Bloom models are: roughly 400B tokens, as opposed to the 1 trillion tokens used for even the smaller LLaMA models.