Reproduction of evaluation scores
Hey Liquid Team,
I'm working on reproducing all of the evaluation numbers published in this work (going well so far!). I've been following the methodology in https://arxiv.org/abs/2511.23404, but it doesn't cover AIME25. Could you share the setup you used for that result (is it few-shot? any specific prompt?)?
Thanks,
Lino
Hey, thanks for your message. We used the methodology described by ArtificialAnalysis here.
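For a rough sense of what that scoring loop looks like, here is a minimal sketch of AIME25-style grading (zero-shot generation, boxed-answer extraction, accuracy averaged over problems). This is illustrative only and not the exact ArtificialAnalysis configuration; `generate` and the dataset fields are placeholders.

```python
import re

# Rough sketch of AIME25-style scoring (NOT the exact ArtificialAnalysis setup).
# `generate(prompt)` stands in for whatever inference stack you use, and
# `problems` is assumed to be a list of {"question": str, "answer": int}.

BOXED_RE = re.compile(r"\\boxed\{(\d+)\}")

def extract_answer(completion: str):
    """Pull the last \\boxed{...} integer out of a model completion."""
    matches = BOXED_RE.findall(completion)
    return int(matches[-1]) if matches else None

def score_aime(problems, generate, n_samples: int = 1) -> float:
    """Mean accuracy over problems, optionally averaged over samples (pass@1 style)."""
    per_problem = []
    for p in problems:
        # The boxed-answer instruction is a common convention for math evals,
        # not necessarily the exact prompt used here.
        prompt = (
            p["question"]
            + "\n\nPlease reason step by step, and put your final answer within \\boxed{}."
        )
        correct = sum(
            extract_answer(generate(prompt)) == p["answer"] for _ in range(n_samples)
        )
        per_problem.append(correct / n_samples)
    return sum(per_problem) / len(per_problem)
```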
Please note that we've changed our methodology quite a bit since LFM2, as described in this paragraph from the model card:
GPQA, MMLU-Pro, IFBench, and AIME25 follow ArtificialAnalysis's methodology. For IFEval and Multi-IF, we report the average score across strict and loose prompt and instruction accuracies. For BFCLv3, we report the final weighted average score with a custom Liquid handler to support our tool use template.
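To make the IFEval / Multi-IF aggregation concrete: the reported number is the mean of the four standard metrics (strict and loose, at both prompt and instruction level). Here is a minimal sketch, assuming lm-evaluation-harness-style metric keys; the key names are an assumption about the evaluation stack, not a statement about our exact tooling.

```python
# Mean of the four standard IFEval metrics, as described above.
# Metric key names mirror the lm-evaluation-harness convention (an assumption).

def ifeval_reported_score(metrics: dict) -> float:
    keys = [
        "prompt_level_strict_acc",
        "prompt_level_loose_acc",
        "inst_level_strict_acc",
        "inst_level_loose_acc",
    ]
    return sum(metrics[k] for k in keys) / len(keys)

# Example with made-up numbers:
# ifeval_reported_score({
#     "prompt_level_strict_acc": 0.78,
#     "prompt_level_loose_acc": 0.82,
#     "inst_level_strict_acc": 0.85,
#     "inst_level_loose_acc": 0.88,
# })  # -> 0.8325
```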
Thanks for this important work, and let us know if you have other questions!