Model Card: Llama-3.1-8B-Instruct_236K
This model is a domain-adapted version of meta-llama/Llama-3.1-8B-Instruct, fine-tuned on 236k English→French sentence pairs from the bioinformatics and biomedical domains.
It is designed for English → French Machine Translation.
⚙️ Model Details
Model Description
- Developed by: Jurgi Giraud
- Model type: Multilingual Large Language Model (LLM)
- Language(s) (NLP): English to French
- License: llama3.1
- Finetuned from model: meta-llama/Llama-3.1-8B-Instruct
This model was fine-tuned as part of a PhD research project investigating domain adaptation for Machine Translation (MT) in low-resource scenarios within the bioinformatics domain (English → French). The project explores the performance of compact MT models and Large Language Models (LLMs), including architectures under 1B parameters as well as models in the 3B–8B range, with a strong emphasis on resource-efficient fine-tuning strategies. The fine-tuning process made use of Parameter-Efficient Fine-Tuning (PEFT) and quantization, in particular QLoRA (Quantized Low-Rank Adaptation), for larger models (Dettmers et al., 2023).
In total, 5 models were fine-tuned on in-domain data: t5_236k | nllb-200-distilled-600M_236K | madlad400-3b-mt_236k | TowerInstruct-7B-v0.2_236k | and Llama-3.1-8B-Instruct_236K (this model)
📝 Usage
This model is intended to be used for English → French Machine Translation in the bioinformatics domain.
Example (GPU)
Below is a basic usage example on GPU with Hugging Face's Transformers library.
First, install the dependencies:

```bash
pip install torch transformers
```
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="jurgiraud/Llama-3.1-8B-Instruct_236K",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Translate from English to French in the bioinformatics domain. Provide only the translation:\nThe deletion of a gene may result in death or in a block of cell division."},
]

prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```
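Note that a `text-generation` pipeline returns the prompt followed by the completion by default. A small helper (hypothetical, not part of the card) to keep only the translation; alternatively, `return_full_text=False` can be passed to the pipeline call:

```python
def extract_completion(generated_text: str, prompt: str) -> str:
    """Strip the echoed prompt from a text-generation output."""
    if generated_text.startswith(prompt):
        # Keep only the text generated after the prompt
        return generated_text[len(prompt):].strip()
    return generated_text.strip()
```

For example, `extract_completion(outputs[0]["generated_text"], prompt)` returns only the French translation.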
🔧 Fine-tuning Details
Fine-tuning Data
The model was fine-tuned on a set of 236k English-French parallel examples consisting of:
- Natural parallel data (bioinformatics and biomedical data)
- Synthetic data, including:
- Back-translation of in-domain monolingual texts
- Paraphrased data
- Terminology-constrained synthetic generation
Fine-tuning dataset available here.
Fine-tuning Procedure
The model was fine-tuned using QLoRA (Quantized Low-Rank Adaptation).
Fine-tuning was performed with the SFTTrainer from the TRL library.
Template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{context}<|eot_id|><|start_header_id|>user<|end_header_id|>
{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{answer}<|eot_id|>
```
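As an illustration, a training example can be rendered into this template with plain string formatting (the system, user, and assistant contents below are made up, not taken from the fine-tuning data):

```python
# The Llama 3.1 chat template shown above, with the card's
# {context}/{question}/{answer} placeholders.
TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    "{context}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
    "{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
    "{answer}<|eot_id|>"
)

# Illustrative example (contents are invented for this sketch)
example = TEMPLATE.format(
    context="You are a translator specialised in bioinformatics.",
    question=("Translate from English to French:\n"
              "The deletion of a gene may result in death."),
    answer="La délétion d'un gène peut entraîner la mort.",
)
```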
Fine-tuning Hyperparameters
Key hyperparameters and training setup:
- Approach: QLoRA (4-bit quantization + LoRA adapters)
- LoRA config: r=8, lora_alpha=16, lora_dropout=0.05, target_modules=["q_proj", "v_proj", "o_proj"]
- Training: 6 epochs, learning rate = 1e-4, batch size = 4 (per device), gradient accumulation = 8
- Precision: bfloat16 (bf16)
- Optimizer: paged_adamw_8bit
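The setup above could be expressed with the `transformers` and `peft` libraries roughly as follows. This is a sketch, not the exact training script: the `nf4` quantization type and `task_type` are assumptions not stated in this card, and all other arguments are left at their defaults.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization (QLoRA); nf4 is an assumption, the card only
# states "4-bit quantization"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # bf16 compute, as stated
)

# LoRA adapters with the card's stated hyperparameters
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```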
📊 Evaluation
The model was evaluated on an in-domain bioinformatics test set using standard MT metrics.
Testing Data & Metrics
Testing Data
Test set available here.
Metrics
- BLEU
- chrF++ (chrF2)
- TER
- COMET
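For intuition, chrF2 is a character n-gram F-score in which recall is weighted twice as heavily as precision (β = 2). Below is a deliberately simplified sketch of the idea; the official sacrebleu implementation differs in whitespace handling, smoothing, and averaging details, so this is illustrative only:

```python
from collections import Counter

def char_ngrams(text, n):
    """Character n-gram counts, ignoring spaces (as chrF does)."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def simple_chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chrF2-style score in [0, 100]."""
    f_scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue  # strings too short for this n-gram order
        overlap = sum((hyp & ref).values())
        prec = overlap / sum(hyp.values())
        rec = overlap / sum(ref.values())
        if prec + rec == 0:
            f_scores.append(0.0)
            continue
        # F-beta score: beta=2 weights recall twice as much as precision
        f = (1 + beta**2) * prec * rec / (beta**2 * prec + rec)
        f_scores.append(f)
    return 100 * sum(f_scores) / len(f_scores) if f_scores else 0.0
```

Identical hypothesis and reference score 100; strings sharing no character n-grams score 0.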
Results
Results from automated metrics. Baseline vs domain-adapted model. Best scores in bold.
| Models | BLEU↑ | chrF2↑ | TER↓ | COMET↑ |
|---|---|---|---|---|
| Baseline model Llama-3.1-8B-Instruct | 40.39 | 69.41 | 50.36 | 85.01 |
| Domain-adapted model Llama-3.1-8B-Instruct_236K | **44.57** | **71.44** | **45.68** | **85.55** |
🌱 Environmental Impact
The fine-tuning carbon footprint was estimated using the Green Algorithms framework (Lannelongue et al., 2021).
- Carbon emissions: 7.67 kgCOβe
- Energy consumption: 33.18 kWh
📚 Citation
BibTeX:

```bibtex
@phdthesis{giraud2026bioinformaticsMT,
  title  = {Developing Machine Translation for Bioinformatics: An Exploration into Domain-Specific Terminology, Domain Adaptation, and Evaluation},
  author = {Giraud, Jurgi},
  school = {The Open University},
  year   = {2026},
  type   = {Doctor of Philosophy ({PhD}) thesis},
  doi    = {10.21954/ou.ro.00109555},
  url    = {https://doi.org/10.21954/ou.ro.00109555},
}
```