Quantization by Richard Erkhov.
# llama3-wordcel - bnb 4bits
- Model creator: https://huggingface.co/jspr/
- Original model: https://huggingface.co/jspr/llama3-wordcel/
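A minimal loading sketch for the 4-bit quant via transformers + bitsandbytes. The repo id and the quantization settings below are assumptions (typical bnb 4-bit defaults), not details stated on this card; substitute the actual repo path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical repo id for this 4-bit quant; replace with the real one.
model_id = "RichardErkhov/jspr_-_llama3-wordcel-4bits"

# Typical bitsandbytes 4-bit settings; the exact settings used for this
# quantization are not stated on the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Quick smoke test.
inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```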
Original model description:
```yaml
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- teknium/OpenHermes-2.5
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
```
## Llama3 8B Wordcel
Wordcel is a Llama3 fine-tune intended as a mid-training checkpoint for more specific RP, storywriting, and creative applications.
It was trained from Llama3 8B Base on a composite dataset of ~100M tokens emphasizing reasoning, (uncensored) stories, classic literature, and assorted interpersonal-intelligence tasks.
Components of the composite dataset include OpenHermes-2.5 and Grimulkan's Theory of Mind and Physical Reasoning datasets.
It was trained at a context length of 32k tokens, using linear RoPE scaling with a factor of 4.0, so derivative models should generalize to 32k tokens.
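As a sketch of what this means at load time, the checkpoint's config should carry the RoPE scaling described above (assuming the standard transformers `rope_scaling` config field; the expected values are taken from this card and may differ in the actual config):

```python
from transformers import AutoConfig

# Inspect the RoPE scaling the original checkpoint was trained with.
config = AutoConfig.from_pretrained("jspr/llama3-wordcel")
print(config.max_position_embeddings)  # expect an extended context window
print(config.rope_scaling)             # expect {"type": "linear", "factor": 4.0}
```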
If you train a model using this checkpoint, please give clear attribution! The Llama 3 base license likely applies.