Update README.md
README.md CHANGED

```diff
@@ -96,46 +96,3 @@ print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
 | ROUGE-1 | 0.35 |
 | ROUGE-2 | 0.19 |
 | ROUGE-L | 0.31 |
-
-## Environmental Impact
-
-* **Hardware Type:** NVIDIA A100 GPU
-* **Hours used:** 2-3 hours
-* **Cloud Provider:** [Optional: e.g., Azure / AWS / self-hosted server]
-* **Compute Region:** [Optional]
-* **Carbon Emitted:** Estimated using the [MLCO2 calculator](https://mlco2.github.io/impact#compute)
-
-## Technical Specifications
-
-### Model Architecture and Objective
-
-A BART-based encoder-decoder architecture; the training objective minimizes the cross-entropy loss between the generated summary and the reference summary.
-
-### Compute Infrastructure
-
-* **GPU:** NVIDIA A100 80GB
-* **Software:** PyTorch, Transformers, PEFT, Datasets, jieba
-
-## Citation
-
-```bibtex
-@misc{your2025bartlora,
-  title={LoRA Fine-tuned BART for Chinese Summarization},
-  author={Your Name},
-  year={2025},
-  howpublished={\url{https://huggingface.co/your-username/your-model-name}},
-}
-```
-
-## Model Card Contact
-
-* [your-email@example.com](mailto:your-email@example.com)
-
-```
-
----
-
-You can replace `"your-username/your-model-name"`, `"Your Name"`, and the email address as needed.
-
-If you also need me to generate an [upload script](f) or a [README generator](f), let me know.
-```
```
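The ROUGE-L rows kept by this commit are LCS-based F-measures over token sequences. As a reminder of how that score is computed, here is a minimal pure-Python sketch (the function names and the character-level tokens in the toy check are illustrative; for Chinese text the token lists would normally come from a word segmenter such as jieba, which the model card lists):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists (DP table)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    """ROUGE-L F-measure over token lists: harmonic mean of LCS-based
    precision (LCS / candidate length) and recall (LCS / reference length)."""
    lcs = lcs_len(candidate, reference)
    if lcs == 0:
        return 0.0
    precision = lcs / len(candidate)
    recall = lcs / len(reference)
    return 2 * precision * recall / (precision + recall)

# Toy check: identical token sequences score 1.0
print(rouge_l_f1(list("摘要文本"), list("摘要文本")))  # → 1.0
```

Production evaluations would use an established ROUGE implementation; this sketch only shows where the number in the table comes from.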
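The deleted "Model Architecture and Objective" section states the training objective as cross-entropy between the generated and reference summaries. At the token level that is the average negative log-likelihood of the reference tokens under the decoder's predicted distributions; a minimal sketch (function and argument names are illustrative):

```python
def seq2seq_nll(step_logprobs, reference_ids):
    """Average negative log-likelihood (cross-entropy) of the reference
    summary under the decoder, where step_logprobs[t][v] is the
    log-probability of vocabulary item v at decoding step t."""
    total = sum(row[tok] for row, tok in zip(step_logprobs, reference_ids))
    return -total / len(reference_ids)
```

A decoder that is uniform over a vocabulary of size V scores ln V per token; training pushes this average down on the reference summaries.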
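The deleted section lists PEFT in the software stack, i.e. the model is a LoRA fine-tune of BART. LoRA keeps the pretrained weight W frozen and trains only a low-rank update scaled by alpha/r. This is a minimal pure-Python sketch of that idea (class name, toy dimensions, and init constants are illustrative, not the PEFT API):

```python
class LoRALinear:
    """A frozen weight matrix W plus a trainable low-rank update:
    y = W x + (alpha / r) * B (A x).  Only A and B are trained."""

    def __init__(self, w, r, alpha):
        out_dim, in_dim = len(w), len(w[0])
        self.w = w                                    # frozen pretrained weight, out_dim x in_dim
        self.r, self.alpha = r, alpha
        self.a = [[0.01] * in_dim for _ in range(r)]  # A: r x in_dim (random init in practice)
        self.b = [[0.0] * r for _ in range(out_dim)]  # B: out_dim x r (zero init, so the update starts at 0)

    def forward(self, x):
        scale = self.alpha / self.r
        ax = [sum(self.a[k][j] * x[j] for j in range(len(x))) for k in range(self.r)]
        return [
            sum(wij * xj for wij, xj in zip(row, x))                     # frozen path: W x
            + scale * sum(self.b[i][k] * ax[k] for k in range(self.r))   # low-rank path
            for i, row in enumerate(self.w)
        ]
```

Because B is zero-initialized, the layer reproduces the pretrained output exactly at step 0, which is why LoRA fine-tuning starts from the base model's behavior and only the small A/B matrices need to be saved as the adapter.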