Request

#1
by Clemylia - opened

Hello. Most of the time I build my models from scratch.
However, for my largest models (in the billions of parameters), the ones that can hold a general conversation, I prefer to do very personal, targeted fine-tuning 📚
(Most of my LLMs built from scratch are small, under 100 million parameters.)

How about fine-tuning one of your models? We could collaborate :)

I can fine-tune your models, you fine-tune mine, and we do cascading fine-tuning.

Does that sound good to you?

I have a French model I created from scratch that might interest you. It needs a lot of fine-tuning to become effective and stop making grammatical errors.
It has 0.7B parameters. You could try refining it on a French corpus (if you have one),
and even customize it to your needs.

For collaboration or work requests, you can contact me on X / Twitter. Thank you.
