Improve model card with metadata and links
Hi! I'm Niels from the Hugging Face community team.
I've opened this PR to enhance the model card for SteuerLLM by adding relevant metadata and providing a more comprehensive overview of the model.
Key changes include:
- Added metadata tags for `pipeline_tag`, `library_name`, `license`, `language`, and linked associated `datasets`.
- Clearly linked the model to its paper, GitHub repository, and demo.
- Provided a structured "Model Description" summarizing its architecture, training methodology, and unique aspects, drawing from the paper abstract and GitHub README.
- Included details on model usage and recommended inference parameters.
- Retained the existing BibTeX citation and explicitly stated the license.
Merging this will make the model more discoverable on the Hub and provide users with a clearer understanding of its capabilities and origin.
---
language:
- de
license: other
library_name: transformers
pipeline_tag: text-generation
datasets:
- windprak/steuerllm_pretraining_dataset
- windprak/steuerllm_instruct_dataset
- windprak/SteuerEx
---

# SteuerLLM: Local specialized large language model for German tax law analysis

SteuerLLM is a domain-adapted Large Language Model (LLM) with 28 billion parameters, designed specifically for German tax law analysis. It was introduced in the paper [SteuerLLM: Local specialized large language model for German tax law analysis](https://huggingface.co/papers/2602.11081).

The model excels in domains governed by strict formal rules, precise terminology, and legally binding structures, such as tax law, where correct answers require exact statutory citation, structured legal argumentation, and numerical accuracy.

- **Paper:** [SteuerLLM: Local specialized large language model for German tax law analysis](https://huggingface.co/papers/2602.11081)
- **GitHub Repository:** [https://github.com/windprak/steuerllm](https://github.com/windprak/steuerllm)
- **Demo:** [https://steuerllm.i5.ai.fau.de/](https://steuerllm.i5.ai.fau.de/)

## Model Description

SteuerLLM is based on an expanded Mistral Small architecture (extended from 24B to 28B parameters through a block expansion method). It was trained on a large-scale synthetic dataset generated from authentic German university tax law examination material using a controlled retrieval-augmented pipeline.

The training procedure follows a two-stage approach:

1. **Continual Pretraining:** The base model's representations are adapted to tax-specific terminology and concepts by pretraining on domain-filtered web data.
2. **Instruction Fine-tuning:** The model is then fine-tuned on synthetically generated question-answer pairs derived from primary German legal sources (e.g., EStG, AO, KStG) using the "Water Fountain Algorithm." This algorithm employs retrieval-augmented generation with semantic ranking to ensure factual grounding and contextual relevance.

SteuerLLM consistently outperforms general-purpose instruction-tuned models of comparable size and, in several cases, substantially larger systems, demonstrating the critical role of domain-specific data and architectural adaptation for performance on realistic legal reasoning tasks.
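The paper's "Water Fountain Algorithm" is not spelled out in this card, so the following is only a toy illustration of the general idea it names, retrieval with semantic ranking: candidate statute passages are scored against a question and the top-ranked ones are kept as grounding context. A bag-of-words cosine score stands in for a real embedding model; the function names are hypothetical.

```python
# Toy illustration of semantic ranking for retrieval-augmented generation.
# NOT the paper's actual algorithm: bag-of-words cosine replaces real embeddings.
from collections import Counter
from math import sqrt


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def rank_passages(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Return the k passages most similar to the question, best first."""
    q = Counter(question.lower().split())
    scored = sorted(
        passages,
        key=lambda p: cosine(q, Counter(p.lower().split())),
        reverse=True,
    )
    return scored[:k]
```

In the real pipeline the retained passages would then be handed to a generator model to produce grounded question-answer pairs.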
## Evaluation

The model's performance was evaluated using **SteuerEx**, the first open benchmark derived from authentic German university tax law examinations. SteuerEx comprises 115 expert-validated examination questions spanning six core tax law domains and multiple academic levels, utilizing a statement-level, partial-credit evaluation framework.
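The benchmark's exact rubric is not reproduced in this card, but the statement-level, partial-credit idea can be sketched as follows: an answer earns the fraction of expert-validated gold statements it covers, rather than all-or-nothing credit. Simple substring matching stands in here for the benchmark's actual matching procedure.

```python
# Hypothetical sketch of statement-level partial credit (not SteuerEx's real rubric):
# credit = fraction of gold statements covered by the model's answer.
def partial_credit(gold_statements: list[str], predicted: str) -> float:
    """Return the fraction of gold statements found in the predicted answer."""
    if not gold_statements:
        return 0.0
    text = predicted.lower()
    hits = sum(1 for s in gold_statements if s.lower() in text)
    return hits / len(gold_statements)
```

An answer that cites the correct statute but misses the second required statement would score 0.5 under this scheme instead of 0.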
## Usage

Because it is based on the Mistral architecture, SteuerLLM can be served with standard frameworks, including **Transformers**, **vLLM**, and **SGLang**.

**Recommended Inference Parameters:**

- **Temperature:** 0.3
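A minimal local-inference sketch with the Transformers library, assuming a standard causal-LM checkpoint. The repo id `windprak/steuerllm` is a guess based on the dataset namespace and should be replaced with the actual Hub id; loading requires enough GPU memory for a 28B-parameter model.

```python
# Hypothetical usage sketch with Hugging Face Transformers.
# The repo id is a placeholder assumption, not confirmed by this card.


def generation_kwargs(temperature: float = 0.3) -> dict:
    """Sampling settings matching the card's recommended temperature of 0.3."""
    return {"do_sample": True, "temperature": temperature, "max_new_tokens": 512}


def load_steuerllm(repo_id: str = "windprak/steuerllm"):  # placeholder repo id
    """Load tokenizer and model; needs GPU memory for 28B parameters."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype="auto", device_map="auto"
    )
    return tokenizer, model
```

After loading, pass `**generation_kwargs()` to `model.generate(...)` on a tokenized prompt; for serving, vLLM and SGLang accept the same temperature setting.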
## Citation

If you use this work, please cite:

```bibtex
@article{steuerllm,
  ...
  url = {https://arxiv.org/abs/2602.11081}
}
```
## License

Research and academic use only.