---
base_model: tiiuae/Falcon3-10B-Instruct
tags:
- fluently-lm
- fluently-sets
- demo
- reasoning
- thinking
- text-generation-inference
- transformers
- unsloth
- falcon3
- falcon
- llama
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- fluently-sets/ultrathink
pipeline_tag: text-generation
---
# FalconThink3-10B Demo (Finetune of Falcon3-10B-IT on the Ultrathink dataset)

***Q4_K_M GGUF quant available [here](https://huggingface.co/fluently-sets/FalconThink3-10B-IT-Q4_K_M-GGUF)***

This is an SFT finetune of Falcon3-10B-IT on the Ultrathink dataset. It is far from a perfect model; its main purpose is to demonstrate how the dataset can be used.

- **Base model**: [tiiuae/Falcon3-10B-Instruct](https://huggingface.co/tiiuae/Falcon3-10B-Instruct)
- **Model type**: [LlamaForCausalLM](https://huggingface.co/models?other=llama)
- **Number of parameters**: 10.3B
- **Precision**: FP16
- **Training method**: SFT
- **Training dataset**: [fluently-sets/ultrathink](https://huggingface.co/datasets/fluently-sets/ultrathink)
- **Languages**: English (mostly)

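Below is a minimal inference sketch using 🤗 Transformers. The repo id `fluently-sets/FalconThink3-10B-IT` is an assumption inferred from the GGUF link above; substitute the actual model id if it differs. It loads the FP16 weights listed in the specs and applies the model's chat template.

```python
# Minimal text-generation sketch (assumed repo id, adjust as needed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fluently-sets/FalconThink3-10B-IT"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the card lists FP16 precision
    device_map="auto",
)

# Build a chat-formatted prompt and generate a reasoning-style answer.
messages = [
    {"role": "user", "content": "Explain step by step why the sky is blue."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
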
*Trained by the Fluently Team ([@ehristoforu](https://huggingface.co/ehristoforu)) using [Unsloth AI](https://github.com/unslothai/unsloth), with love🥰*

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)