another model, again
Well, my discussion before wasn't answered, but I already have a new model LOL, just the usual small model.
https://huggingface.co/simonko912/oasst2-llama
https://huggingface.co/mradermacher/oasst2-llama-GGUF
lol it was already quanted before I queued it
do my models get auto queued now lol
I don't know, I guess someone did that lol, we don't have autoqueue yet. I could technically do that, but I'm a bit too lazy
I basically need a GGUF version for every single model I make, so someone saw that I made another model and queued it
well, perhaps nico or mradermacher. You have my discord, so when you make a model just let me know so I can queue it
Currently I'm fixing my oasst1 model, since each line displays like this, for example:
User: hello, what's 10+9?
Assistant: 10+9 is easy, just add the 2 numbers together.
Assistant: 10+9 is 19
(this is an example of my CLI decoding the JSONL I have; I think I should use the newline character, or whatever it is, for a new line.)
\n is newline probably
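A quick sketch of the `\n` point: inside a JSONL string, a line break is stored as the two-character escape `\n`, and a proper JSON decoder turns it back into a real newline. Minimal example (the `text` field name is just an assumption about the dataset format):

```python
import json

# In JSONL, a newline inside a string value is stored as the escape \n.
# json.loads converts the escape back into an actual newline character,
# so the decoded text splits cleanly into separate turns.
line = '{"text": "User: hello\\nAssistant: 10+9 is 19"}'
record = json.loads(line)
turns = record["text"].splitlines()
```

If a CLI prints the raw JSONL line instead of the decoded value, the turns can run together or render oddly, so it's worth checking whether the decoder ran at all.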
It could be an issue with my run script only, and maybe it is formatted correctly; I need to check with LM Studio, but I'm currently not going to be home. Thanks to Nico's container I can format the training data and train there. If you want the code I have, I can send it to you (I have a parquet to JSONL converter lol)
well, I think your issue might be that you forgot to train an end token or something like that. I don't really understand what is wrong with the output, except that there are two assistant turns.
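On the end-token idea: a common fix is to append the tokenizer's end-of-sequence marker after each assistant turn in the training text, so the model learns where a response stops instead of generating another assistant turn. A hedged sketch, where the `</s>` token and the `format_sample` helper are assumptions (the actual token depends on the tokenizer in use):

```python
EOS = "</s>"  # assumed end-of-sequence token; check your tokenizer's actual EOS

def format_sample(conversation, eos=EOS):
    """Render a list of {"role", "text"} turns into one training string,
    appending the EOS marker after every assistant turn."""
    lines = []
    for turn in conversation:
        text = f"{turn['role'].capitalize()}: {turn['text']}"
        if turn["role"] == "assistant":
            text += eos  # teaches the model when to stop generating
        lines.append(text)
    return "\n".join(lines)
```

Without that marker in training, sampling tends to keep going after the answer, which would produce exactly the doubled-assistant output shown above.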
It could be an issue with my code, I should revise it; I can show it to you too