---
library_name: transformers
datasets:
- alessioGalatolo/AMAeval
language:
- en
base_model:
- Qwen/Qwen2.5-3B-Instruct
---

# Model Card for AMAeval

This is the classifier used as the dynamic benchmark for [AMAeval](https://github.com/alessioGalatolo/AMAeval). Given appropriate reasoning as input, it returns a score in \[0, 1\].

## Model Details

### Model Description

This is a fine-tuned version of Qwen/Qwen2.5-3B-Instruct.

### Model Sources

- **Repository:** [https://github.com/alessioGalatolo/AMAeval](https://github.com/alessioGalatolo/AMAeval)
- **Paper:** TBA

## Uses

This model is to be used as described in [https://github.com/alessioGalatolo/AMAeval](https://github.com/alessioGalatolo/AMAeval). Do not use the model outside its intended purpose.
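
The repository defines the exact scoring pipeline. As a rough illustration only, the sketch below assumes the classifier is loadable via a standard `transformers` sequence-classification head and that a sigmoid maps the raw logit into \[0, 1\]; the hub model id, head type, and input format are assumptions, not confirmed by this card.

```python
# Hypothetical usage sketch -- model id and head type are assumptions;
# follow the AMAeval repository for the real scoring pipeline.
import math


def logit_to_score(logit: float) -> float:
    """Map a raw classifier logit to a score in [0, 1] via a sigmoid."""
    return 1.0 / (1.0 + math.exp(-logit))


def score_reasoning(text: str, model_id: str = "alessioGalatolo/AMAeval"):
    # Heavy imports kept local so the helper above stays dependency-free.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # A single-logit head squeezed to a scalar, then mapped into [0, 1].
    return logit_to_score(logits.squeeze().item())
```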

## Training Details

### Training Data

The dataset used to train this model is available at [https://huggingface.co/datasets/alessioGalatolo/AMAeval](https://huggingface.co/datasets/alessioGalatolo/AMAeval).

## Citation

**BibTeX:**

```bibtex
@incollection{galatolo2025amaeval,
  title     = {Beyond Ethical Alignment: Evaluating LLMs as Artificial Moral Assistants},
  author    = {Galatolo, Alessio and Rappuoli, Luca Alberto and Winkle, Katie and Beloucif, Meriem},
  booktitle = {ECAI 2025},
  pages     = {},
  year      = {2025},
  publisher = {IOS Press}
}
```