Fix metadata #1
opened by nielsr (HF Staff)

README.md CHANGED

````diff
@@ -1,4 +1,4 @@
-
+---
 base_model: Models/llama3-8b-instruct
 library_name: peft
 language:
@@ -7,22 +7,16 @@ language:
 
 # Model Card for Model ID
 
-<!-- Provide a quick summary of what the model is/does. -->
-
 # 🤗 PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning
 
 This is the official model for **[PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning](https://arxiv.org/pdf/2502.15543)**.
 
 The PIP-KAG model is designed to address **knowledge conflicts** in **knowledge-augmented generation** tasks by leveraging a **parametric pruning** strategy, improving the **contextual faithfulness** of language models during knowledge-intensive generation.
 
----
-
 ## 📄 **Paper**
 For a detailed explanation of the methodology and experiments, please refer to our paper:
 [**PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning**](https://arxiv.org/abs/2502.15543)
 
----
-
 ## 🚀 Reproduce the Results
 To reproduce the experiments and benchmarks from the paper, follow the instructions provided in the official GitHub repository:
 [🔗 GitHub: OpenBMB/PIP-KAG](https://github.com/OpenBMB/PIP-KAG).
@@ -46,5 +40,4 @@ If you use PIP-KAG in your work, please consider citing our paper:
   url={https://arxiv.org/abs/2502.15543},
 }
 
-```
-
+```
````