
Model Card for CUNI-MH

CUNI-MH is an English-to-Czech translation model built on top of Mistral-7B-v0.1 for WMT24. It was trained using QLoRA supervised fine-tuning (SFT), SLERP merging, LoRA with Contrastive Preference Optimization (CPO), and linear weight merging. For details, please check the paper. Note that the model was fine-tuned for the translation task only and we don't expect it to perform well on other tasks. It may also be very sensitive to deviations from the exact prompt template that was used during training. Note that we did not train it on Czech-to-English translation.


Usage

We recommend using vLLM for high-throughput inference, either directly or via its OpenAI-like server. If you use the model, please make sure to use the correct prompt template, as we expect the model to be sensitive to changes in it.

vLLM Python

A minimal usage example using vLLM:

import vllm


MODEL = "wmt24-cuni/CUNI-MH"
llm = vllm.LLM(
    MODEL,
    enforce_eager=True,
    seed=42,
)


def format_prompt(src_lang, tgt_lang, src):
    return "### Instruction:\nTranslate Input from {src_lang} to {tgt_lang}\n### Glossary:\n\n### Previous text:\n\n### Input:\n{src}\n### Response:\n".format(
        src_lang=src_lang,
        tgt_lang=tgt_lang,
        src=src,
    )

prompt = format_prompt("English", "Czech", "Yesterday, there was a whole lot of umbrellas falling from the sky.")


sampling_params = vllm.SamplingParams(
    temperature=0,
    max_tokens=512,
    stop=["\n"],
)

outputs = llm.generate(
    prompt,
    sampling_params=sampling_params,
)
# llm.generate returns a list of RequestOutput objects;
# the translated text is in the first candidate of the first request.
print(outputs[0].outputs[0].text)

vLLM OpenAI-like Server

vllm serve \
        wmt24-cuni/CUNI-MH \
        --dtype auto \
        --host 0.0.0.0
Notes

Note that for the WMT25 submission, we split the input into segments of at most 256 input tokens at sentence boundaries. For this purpose, we used the sentence-splitter library.
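The splitting step above can be sketched as a greedy packing of sentences into token-limited segments. This is an illustrative reconstruction, not the exact submission code: the whitespace token count and regex sentence splitter are stand-ins for the model tokenizer and the sentence-splitter library, respectively:

```python
import re

MAX_TOKENS = 256


def count_tokens(text):
    # Stand-in for the model tokenizer: whitespace word count.
    return len(text.split())


def split_sentences(text):
    # Stand-in for the sentence-splitter library: split after ., ! or ?
    return [s for s in re.split(r"(?<=[.!?])\s+", text) if s]


def pack_segments(text, max_tokens=MAX_TOKENS):
    """Greedily pack sentences into segments of at most max_tokens tokens.

    A single sentence longer than max_tokens still becomes its own segment.
    """
    segments, current = [], []
    for sent in split_sentences(text):
        candidate = current + [sent]
        if current and count_tokens(" ".join(candidate)) > max_tokens:
            segments.append(" ".join(current))
            current = [sent]
        else:
            current = candidate
    if current:
        segments.append(" ".join(current))
    return segments
```

Each resulting segment is then translated independently with the prompt template shown earlier.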

Citation

BibTeX:

@inproceedings{hrabal-etal-2024-cuni,
    title = "{CUNI} at {WMT}24 General Translation Task: {LLM}s, ({Q}){L}o{RA}, {CPO} and Model Merging",
    author = "Hrabal, Miroslav  and
      Jon, Josef  and
      Popel, Martin  and
      Luu, Nam  and
      Semin, Danil  and
      Bojar, Ond{\v{r}}ej",
    editor = "Haddow, Barry  and
      Kocmi, Tom  and
      Koehn, Philipp  and
      Monz, Christof",
    booktitle = "Proceedings of the Ninth Conference on Machine Translation",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.wmt-1.16/",
    doi = "10.18653/v1/2024.wmt-1.16",
    pages = "232--246",
    abstract = "This paper presents the contributions of Charles University teams to the WMT24 General Translation task (English to Czech, German and Russian, and Czech to Ukrainian), and the WMT24 Translation into Low-Resource Languages of Spain task.Our most elaborate submission, CUNI-MH for en2cs, is the result of fine-tuning Mistral 7B v0.1 for translation using a three-stage process: Supervised fine-tuning using QLoRA, Contrastive Preference Optimization, and merging of model checkpoints. We also describe the CUNI-GA, CUNI-Transformer and CUNI-DocTransformer submissions, which are based on our systems from the previous year.Our en2ru system CUNI-DS uses a similar first stage as CUNI-MH (QLoRA for en2cs) and follows with transferring to en2ru.For en2de (CUNI-NL), we experimented with a LLM-based speech translation system, to translate without the speech input.For the Translation into Low-Resource Languages of Spain task, we performed QLoRA fine-tuning of a large LLM on a small amount of synthetic (backtranslated) data."
}

