Uploaded model
- Developed by: xensive
- License: apache-2.0
- Finetuned from model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
Datasets Used
- 1. https://huggingface.co/datasets/mlabonne/FineTome-100k
- 2. https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered
Updates
This version now outputs the tokenizer artifacts I mentioned in my previous version; the cause could be the dataset or how I prepared the data. Either way, multi-turn datasets proved too much to handle.
The model does not yet refuse any requests.
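Since multi-turn data was the sticking point, one workaround is to flatten each conversation into independent single-turn (prompt, response) pairs before training. Below is a minimal sketch assuming the ShareGPT-style record layout (`conversations` list with `from`/`value` fields, as used by the ShareGPT_Vicuna dataset); the function name and exact preprocessing are illustrative, not the steps actually used here.

```python
# Sketch: split a multi-turn ShareGPT-style record into independent
# single-turn (human, gpt) pairs. Field names follow the ShareGPT
# format ("conversations", "from", "value") — an assumption, not the
# exact preprocessing used for this model.

def flatten_sharegpt(example):
    """Return a list of (prompt, response) pairs from one record."""
    turns = example["conversations"]
    pairs = []
    for i in range(len(turns) - 1):
        # Keep only adjacent human -> gpt exchanges.
        if turns[i]["from"] == "human" and turns[i + 1]["from"] == "gpt":
            pairs.append((turns[i]["value"], turns[i + 1]["value"]))
    return pairs

sample = {
    "conversations": [
        {"from": "human", "value": "Hi there."},
        {"from": "gpt", "value": "Hello!"},
        {"from": "human", "value": "What is 2+2?"},
        {"from": "gpt", "value": "4."},
    ]
}
print(flatten_sharegpt(sample))
```

Each pair can then be formatted with the base model's chat template as a standalone example, at the cost of losing cross-turn context.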
To Do
- Decide whether to scrap this or keep finetuning it; undecided either way