Add pipeline tag and link to code
#1, opened by nielsr (HF Staff)

README.md CHANGED
```diff
@@ -1,7 +1,7 @@
 ---
+base_model: meta-llama/Llama-3.2-1B-Instruct
 library_name: transformers
 license: other
-base_model: meta-llama/Llama-3.2-1B-Instruct
 tags:
 - llama-factory
 - full
@@ -9,6 +9,7 @@ tags:
 model-index:
 - name: ScienceLLaMA-1B
   results: []
+pipeline_tag: text-generation
 ---
 
 # ScienceLLaMA-3B
@@ -16,8 +17,8 @@ model-index:
 • 🤗 <a href="https://huggingface.co/datasets/JingyaoLi/Science-Logits-1.2M" target="_blank">Data </a>
 • 🤗 <a href="https://huggingface.co/JingyaoLi/ScienceLLaMA-3b" target="_blank">ScienceLLaMA-3B </a>
 • 🤗 <a href="https://huggingface.co/JingyaoLi/ScienceLLaMA-1b" target="_blank">ScienceLLaMA-1B </a>
-• 🌱 <a href="
-• 📑
+• 🌱 <a href="https://github.com/hiyouga/LLaMA-Factory" target="_blank">Code</a>
+• 📑 <a href="https://arxiv.org/abs/2505.24461" target="_blank">Paper</a> <br>
 </p>
 
 This model is a fine-tuned with **Logits-Based Finetuning** on the [JingyaoLi/Science-Logits-1.2M](https://huggingface.co/datasets/JingyaoLi/Science-Logits-1.2M), which integrates the strengths of supervised learning and knowledge distillation by combining teacher logits with ground truth labels. This preserves both correctness and linguistic diversity.
@@ -56,4 +57,4 @@ The following hyperparameters were used during training:
 - Transformers 4.45.0
 - Pytorch 2.4.0+cu121
 - Datasets 2.21.0
-- Tokenizers 0.20.1
+- Tokenizers 0.20.1
```