---
library_name: transformers
license: apache-2.0
datasets:
- llm-jp/oasst2-33k-ja
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B
inference: false
---
|
# Take-7B

## Description
Take-7B is an instruction-tuned model based on Qwen2.5-7B, fine-tuned on the oasst2-33k-ja dataset.
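
The card does not include a usage snippet; a minimal inference sketch with transformers might look like the following. The repo id is an assumption inferred from the card title and the Series table's organization name, and the model is assumed to expose a standard chat template, so verify both before use.

```python
# Minimal inference sketch; not an official usage example.
# MODEL_ID is hypothetical (inferred from the card title and the
# Series table's organization); verify the actual repo id before use.
MODEL_ID = "Manual-Dataset-Creation-Project/Take-7B"

def build_messages(user_prompt: str) -> list:
    # Chat-format messages consumed by tokenizer.apply_chat_template.
    return [{"role": "user", "content": user_prompt}]

def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the sketch can be read without the weights downloaded.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    text = tokenizer.apply_chat_template(
        build_messages(user_prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Calling `generate("Hello, who are you?")` would then return the model's reply as a string.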
|
## Series
| Variant | Link |
| --- | --- |
| Malum-230 | [Manual-Dataset-Creation-Project/Malum-230](https://huggingface.co/datasets/Manual-Dataset-Creation-Project/Malum-230) |
| Matsu-7B | [Manual-Dataset-Creation-Project/Matsu-7B](https://huggingface.co/Manual-Dataset-Creation-Project/Matsu-7B) |

## Contributors
- [Sudy](https://huggingface.co/sudy-super)
- [ほーりーふぉっくす](https://huggingface.co/Holy-fox)

## Acknowledgments
We would like to express our gratitude to [VOLTMIND](https://voltmind.jp/) for providing the computational resources used to train this model.