Mads (mhenrichsen)
57 followers · 7 following
AI & ML interests
None yet
Recent Activity
- updated a model 4 days ago: syvai/qwen3-14b-translate-eng-da
- published a model 4 days ago: syvai/qwen3-14b-translate-eng-da
- replied to hannayukhymenko's post 10 days ago:
Do you translate your benchmarks from English correctly? It turns out that, for many languages, it is much harder than you might imagine! Introducing Recovered in Translation, together with @aalexandrov: https://ritranslation.insait.ai

Translating benchmarks is a painful process requiring a lot of manual inspection and adjustment. You start by setting up the whole pipeline and adapting it to every format type, including task specifics. Some massive translated benchmarks already exist, but they still contain simple (and sometimes silly) bugs that can hurt evaluations.

We present a novel automated translation framework to help with that! Eastern and Southern European languages have richer linguistic structures than English, and for benchmarks that rely heavily on grammatical coherence, machine translation risks harming evaluations. We discovered potential answer leakage and misleading cues introduced through the grammatical structure of the questions. Some benchmarks are also simply outdated and need to be retranslated with newer and better models.

Our framework includes novel test-time scaling methods that allow control over time and cost investments while mitigating the need for human-in-the-loop verification.

While working on the Ukrainian-focused MamayLM models, we had to translate 10+ benchmarks in a short span of time. Finding human evaluators is costly and time-consuming, and the same goes for professional translators. With our pipeline we were able to do it in 3 days.

We hope our findings will help enable stronger multilingual evaluations and development. We release all produced benchmarks on Hugging Face, together with the source code and arXiv paper.

Paper: https://huggingface.co/papers/2602.22207
Code: https://github.com/insait-institute/ritranslation
Benchmarks: https://huggingface.co/collections/INSAIT-Institute/multilingual-benchmarks
mhenrichsen's Spaces (3)
- syv.ai TTS (Runtime error): Try syv.ai TTS
- Axolotl_Launcher (Sleeping): Create a training configuration for Axolotl
- DanskGPT (Runtime error)