---
library_name: transformers
datasets:
- alessioGalatolo/AMAeval
language:
- en
base_model:
- Qwen/Qwen2.5-3B-Instruct
---

# Model Card for AMAeval

This is the classifier used as the dynamic benchmark for [AMAeval](https://github.com/alessioGalatolo/AMAeval). Given appropriate reasoning as input, it outputs a score in \[0, 1\].

## Model Details

### Model Description

This is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).

### Model Sources

- **Repository:** [https://github.com/alessioGalatolo/AMAeval](https://github.com/alessioGalatolo/AMAeval)
- **Paper:** TBA

## Uses

This model is intended to be used as described in the [AMAeval repository](https://github.com/alessioGalatolo/AMAeval). Do not use the model outside its intended purpose.

## Training Details

### Training Data

The dataset used to train this model is available at [https://huggingface.co/datasets/alessioGalatolo/AMAeval](https://huggingface.co/datasets/alessioGalatolo/AMAeval).

## Citation

**BibTeX:**

```bibtex
@incollection{galatolo2025amaeval,
  title     = {Beyond Ethical Alignment: Evaluating LLMs as Artificial Moral Assistants},
  author    = {Galatolo, Alessio and Rappuoli, Luca Alberto and Winkle, Katie and Beloucif, Meriem},
  booktitle = {ECAI 2025},
  pages     = {},
  year      = {2025},
  publisher = {IOS Press}
}
```
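
## How to Get Started with the Model

The sketch below shows one way to load the checkpoint and obtain a score with 🤗 Transformers. It is a minimal, hedged example: the Hub repository id, the single-logit sequence-classification head, and the sigmoid post-processing are assumptions not confirmed by this card — follow the AMAeval repository for the canonical prompt format and scoring protocol.

```python
def score_reasoning(reasoning: str, model_id: str) -> float:
    """Score a piece of reasoning in [0, 1] with the AMAeval classifier.

    Assumptions (not confirmed by this card): the checkpoint loads with a
    single-logit sequence-classification head, and `model_id` is this
    repository's Hugging Face Hub id.
    """
    # Heavy dependencies are imported lazily so the sketch stays self-contained.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()

    inputs = tokenizer(reasoning, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logit = model(**inputs).logits.squeeze()
    # Squash the raw logit into [0, 1]; skip this step if the head
    # already outputs a probability.
    return torch.sigmoid(logit).item()
```

For the exact input formatting expected by the classifier, see the evaluation code in the [AMAeval repository](https://github.com/alessioGalatolo/AMAeval).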