---
library_name: transformers
license: apache-2.0
datasets:
- Manual-Dataset-Creation-Project/Malum-230
- llm-jp/oasst2-33k-ja
language:
- ja
base_model:
- Qwen/Qwen2.5-7B
inference: false
---

# Matsu-7B

## Description
Matsu-7B is a model instruction-tuned on the oasst2 and Malum-230 datasets, using Qwen2.5-7B as its base model.

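A minimal usage sketch with the `transformers` text-generation API. This assumes the model is published under the repository id `Manual-Dataset-Creation-Project/Matsu-7B` and that the tokenizer ships a chat template (both inherited from common practice for Qwen2.5-based instruct models, not confirmed by this card):

```python
# Hypothetical usage sketch for Matsu-7B; repo id and chat template are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Manual-Dataset-Creation-Project/Matsu-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place weights on available GPU(s), else CPU
    torch_dtype="auto",  # use the dtype the checkpoint was saved in
)

# Build a single-turn chat prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "日本の首都はどこですか？"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Running a 7B model in full precision requires roughly 15 GB of memory; quantized loading (e.g. via `bitsandbytes`) can reduce this if needed.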
## Series
| Variant | Link |
| --- | --- |
| Malum-230 | [Manual-Dataset-Creation-Project/Malum-230](https://huggingface.co/datasets/Manual-Dataset-Creation-Project/Malum-230) |
| Take-7B | [Manual-Dataset-Creation-Project/Take-7B](https://huggingface.co/Manual-Dataset-Creation-Project/Take-7B) |

## Contributors
- [Sudy](https://huggingface.co/sudy-super)
- [ほーりーふぉっくす](https://huggingface.co/Holy-fox)

## Acknowledgments
We would like to express our gratitude to [VOLTMIND](https://voltmind.jp/) for providing the computational resources used to train this model.