The model achieves **85.04% accuracy** on the AMSBench-TQA benchmark, showing a **15.67% improvement** over the initial Qwen2.5-32B-Instruct model.

## Limitations

While this model demonstrates good performance on the AMSBench-TQA benchmark, it is specialized for this domain. Its applicability and performance in other, unrelated domains may be limited. Users should be aware that, like all language models, it may occasionally generate incorrect or nonsensical information, especially for highly novel concepts underrepresented in its training data.

## Sample Usage

You can use this model with the Hugging Face `transformers` library: