---
title: README
emoji: 🚀
colorFrom: red
colorTo: purple
sdk: static
pinned: false
---

Evaluations and more information about the training of every Gerbil model, and about the mixture-of-tasks "Blender" pretraining method inspired by UL2, can be found here: https://github.com/aicrumb/notebook-hosting/blob/main/GerbilLabEvaluations.md

Special tokens for "Blender" models' pretraining include:

```
'', '', '', '', '', '', ''

# Example fill in the middle
' this is an for fill-in-the-middle example text <|endoftext|>'

# Example causal language modelling
' this is an example text for causal language modelling <|endoftext|>'

# Example masked language modelling
' this is an text for masked language modelling example <|endoftext|>'
```
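As a rough illustration of the mixture-of-tasks idea, the sketch below formats one piece of text three ways (fill-in-the-middle, causal LM, masked LM). The token names used here (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`, `<mask>`) are hypothetical placeholders, not the Blender models' actual special tokens, which are listed in the evaluations document linked above; only `<|endoftext|>` appears in the examples themselves.

```python
import random

# Hypothetical placeholder tokens -- substitute the real special tokens
# from the model's tokenizer; they are not reproduced in this README.
FIM_PREFIX = "<fim_prefix>"
FIM_SUFFIX = "<fim_suffix>"
FIM_MIDDLE = "<fim_middle>"
MASK = "<mask>"
EOT = "<|endoftext|>"

def format_fim(text: str, rng: random.Random) -> str:
    """Fill-in-the-middle: cut a random middle span and move it to the end,
    so the model learns to generate the missing span given its context."""
    words = text.split()
    i = rng.randrange(1, len(words) - 1)
    j = rng.randrange(i + 1, len(words))
    prefix, middle, suffix = words[:i], words[i:j], words[j:]
    parts = [FIM_PREFIX, *prefix, FIM_SUFFIX, *suffix, FIM_MIDDLE, *middle]
    return " ".join(parts) + " " + EOT

def format_clm(text: str) -> str:
    """Causal language modelling: the text as-is, ending with <|endoftext|>."""
    return text + " " + EOT

def format_mlm(text: str, rng: random.Random, mask_prob: float = 0.15) -> str:
    """Masked language modelling: replace a fraction of tokens with a mask."""
    words = [MASK if rng.random() < mask_prob else w for w in text.split()]
    return " ".join(words) + " " + EOT

rng = random.Random(0)
print(format_clm("this is an example text for causal language modelling"))
print(format_fim("this is an example text for fill in the middle", rng))
print(format_mlm("this is an example text for masked language modelling", rng))
```

During pretraining, each document would be routed to one of these formatters (e.g. sampled per example), so a single model sees all three objectives in one training mixture.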