Armenian-Text-Embeddings-2 (ATE-2)

Model Details

  • Model Name: Armenian-Text-Embeddings-2-large
  • Model Type: Text Embeddings for Armenian Language
  • Base Model: intfloat/multilingual-e5-large
  • Version: 2.0
  • Last Updated: March 2026
  • Model Architecture: Transformer-based embeddings model
  • Input: Armenian text
  • Output: Dense vector embeddings

Quick Start

import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('Metric-AI/armenian-text-embeddings-2-large')
model = AutoModel.from_pretrained('Metric-AI/armenian-text-embeddings-2-large')


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
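    # Zero out embeddings at padding positions, then mean-pool over the sequence.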
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
    'query: Ինչպե՞ս պատրաստել տոլմա',  # How to make tolma
    'query: Քանի՞ գրամ սպիտակուց է հարկավոր օրական',  # How many grams of protein needed daily
    
    """passage: Տոլմայի բաղադրատոմս՝
    Բաղադրիչներ՝
    - 500գ աղացած միս
    - 1 բաժակ բրինձ
    - Խաղողի տերևներ
    - 2 գլուխ սոխ
    - Համեմունքներ՝ աղ, սև պղպեղ, քարի
    
    Պատրաստման եղանակը՝
    1. Միսը խառնել բրնձի, մանր կտրատած սոխի և համեմունքների հետ
    2. Խաղողի տերևները լվանալ և թողնել տաք ջրի մեջ 10 րոպե
    3. Լցոնել տերևները և դասավորել կաթսայի մեջ
    4. Եփել դանդաղ կրակի վրա 45-60 րոպե""",  # Detailed tolma recipe
    
    """passage: Սպիտակուցի օրական չափաբաժինը կախված է մարդու քաշից, սեռից և ֆիզիկական ակտիվությունից: 
    Միջին հաշվով, կանանց համար խորհուրդ է տրվում 46-50 գրամ սպիտակուց օրական: 
    Մարզիկների համար այս թիվը կարող է հասնել մինչև 1.6-2 գրամ մարմնի քաշի յուրաքանչյուր կիլոգրամի համար: 
    Հղիների համար պահանջվում է լրացուցիչ 25 գրամ սպիտակուց:
    
    Սպիտակուցի հարուստ աղբյուրներ են՝
    - Հավի միս (31գ/100գ)
    - Ձու (13գ/100գ)
    - Ոսպ (25գ/100գ)
    - Մածուն (3.5գ/100գ)"""] # Detailed protein intake advice

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())

# [[79.6805419921875, 38.23454284667969], [41.4409294128418, 79.22757720947266]]

Support for Sentence Transformers

Below is an example of using the model with sentence_transformers.

from sentence_transformers import SentenceTransformer
model = SentenceTransformer('Metric-AI/armenian-text-embeddings-2-large')

embeddings = model.encode(input_texts, normalize_embeddings=True)
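
Since encode returns embeddings that are already L2-normalized (normalize_embeddings=True), the same dot-product scoring as in the Quick Start applies; note that input_texts must keep their "query: "/"passage: " prefixes here as well.

scores = (embeddings[:2] @ embeddings[2:].T) * 100  # NumPy array returned by model.encode
print(scores.tolist())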

Intended Use

Primary Intended Uses

  • Retrieval-augmented generation (RAG)
  • Semantic search in Armenian
  • Document similarity computation
  • Cross-lingual text understanding
  • Text classification tasks
  • Information retrieval
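
As a concrete illustration of the semantic-search use case, here is a minimal sketch reusing the SentenceTransformer model loaded above; the corpus and query are invented examples, not part of any benchmark:

import numpy as np

# Toy corpus; a real application would index many more passages.
corpus = [
    'passage: Երևանը Հայաստանի մայրաքաղաքն է:',      # Yerevan is the capital of Armenia.
    'passage: Արարատ լեռը Հայաստանի խորհրդանիշն է:',  # Mount Ararat is a symbol of Armenia.
]
corpus_emb = model.encode(corpus, normalize_embeddings=True)

query_emb = model.encode(['query: Ո՞րն է Հայաստանի մայրաքաղաքը'],  # What is the capital of Armenia?
                         normalize_embeddings=True)

# With L2-normalized vectors, cosine similarity reduces to a dot product.
scores = query_emb @ corpus_emb.T
best = int(np.argmax(scores))
print(corpus[best], float(scores[0, best]))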

Training Procedure

This model was trained following the recipe described in "Less is More: Adapting Text Embeddings for Low-Resource Languages with Small Scale Noisy Synthetic Data" (see Citation below).

Training Details

  • Weight Averaging (see the sketch after this list):
    • Base model (multilingual-e5-large): 0.5 weight
    • Fine-tuned checkpoint: 0.5 weight
  • Hardware: 8 x AMD MI250X GPUs
  • Training Parameters:
    • Epochs: 5
    • Batch Size: 1024 per GPU
    • Learning Rate: 7e-5
    • Weight Decay: 0.01
    • Warmup Ratio: 0.2
    • Maximum Sequence Length: 128 tokens
    • FP16 Training: Enabled
    • Gradient Clipping: 1.0
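
The 0.5/0.5 weight averaging above can be reproduced by linearly interpolating the two state dicts; a minimal sketch (the fine-tuned checkpoint path and output directory are placeholders, not the actual training artifacts):

from transformers import AutoModel

base = AutoModel.from_pretrained('intfloat/multilingual-e5-large')
finetuned = AutoModel.from_pretrained('path/to/finetuned-checkpoint')  # hypothetical path

# Average each parameter tensor with equal 0.5 weights.
finetuned_state = finetuned.state_dict()
merged_state = {name: 0.5 * param + 0.5 * finetuned_state[name]
                for name, param in base.state_dict().items()}
base.load_state_dict(merged_state)
base.save_pretrained('merged-checkpoint')  # hypothetical output directory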

Optimization Configuration

  • Framework: DeepSpeed Stage 2
  • Optimizer: AdamW with auto weight decay
  • Mixed Precision: FP16 with dynamic loss scaling
  • ZeRO Optimization: Stage 2 with:
    • Allgather partitions
    • Overlap communications
    • Contiguous gradients
  • Additional Features:
    • Gradient checkpointing
    • Tensor parallelism (size: 2)
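
A hedged sketch of a DeepSpeed configuration matching these settings (the values mirror the lists above, but this is an assumption, not the original training config file):

ds_config = {
    "train_micro_batch_size_per_gpu": 1024,
    "gradient_clipping": 1.0,
    "fp16": {
        "enabled": True,
        "loss_scale": 0,  # 0 enables dynamic loss scaling
    },
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": 7e-5, "weight_decay": 0.01},
    },
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": True,
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
}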

Performance and Limitations

Capabilities

  • Effective for semantic similarity tasks in Armenian
  • Suitable for document classification and clustering
  • Handles transliterated (Latin-script) Armenian queries, as illustrated below
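
For example, a Latin-script rendering of the first Quick Start query can be scored against the same passages. This reuses tokenizer, model, average_pool, and embeddings from the Quick Start; the transliteration spelling is illustrative:

translit = tokenizer(['query: inchpes patrastel tolma'],  # transliterated "How to make tolma"
                     max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**translit)
q = average_pool(outputs.last_hidden_state, translit['attention_mask'])
q = F.normalize(q, p=2, dim=1)
print((q @ embeddings[2:].T) * 100)  # the tolma passage should score higher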

Limitations

  • Performance may vary on domain-specific terminology
  • May not capture Armenian-specific cultural contexts effectively
  • Limited by the quality of training data translations

Known Biases

  • May exhibit biases present in Reddit content

Ethical Considerations

  • Data Privacy: Training data is derived from publicly available Reddit content
  • Potential Misuse: Could be misused for content manipulation or spam
  • Bias: May perpetuate social biases present in Reddit content
  • Recommendations:
    • Monitor system outputs for harmful content
    • Implement content filtering for production use
    • Regular bias assessment recommended

Technical Specifications

  • Model Size: 0.6B parameters (based on multilingual-e5-large)
  • Embedding Dimension: 1024
  • Max Sequence Length: 512 tokens
  • Framework Compatibility:
    • PyTorch
    • Hugging Face Transformers
    • DeepSpeed
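
These figures can be sanity-checked against the loaded checkpoint; a minimal sketch assuming the model and tokenizer from the Quick Start:

print(model.config.hidden_size)                    # embedding dimension: 1024
print(tokenizer.model_max_length)                  # maximum sequence length: 512
print(sum(p.numel() for p in model.parameters()))  # roughly 0.6B parameters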

Citation

@misc{armenian-text-embeddings-2-large,
  author = {Navasardyan, Zaruhi and Bughdaryan, Spartak and Minasyan, Bagratuni and Davtyan, Hrant},
  title = {Armenian-Text-Embeddings-2-large: Enhanced Armenian Language Embeddings},
  year = {2026},
  howpublished = {\url{https://huggingface.co/Metric-AI/armenian-text-embeddings-2-large}}
}

@inproceedings{navasardyan2026lessismore,
  title={Less is More: Adapting Text Embeddings for Low-Resource Languages with Small Scale Noisy Synthetic Data},
  author={Navasardyan, Zaruhi and Bughdaryan, Spartak and Minasyan, Bagratuni and Davtyan, Hrant},
  booktitle={Proceedings of the Workshop on Language Models for Low-Resource Languages (LoResLM) at EACL 2026},
  year={2026}
}

Additional Information

Base Model References

  • intfloat/multilingual-e5-large: https://huggingface.co/intfloat/multilingual-e5-large

Acknowledgments

  • intfloat for the original multilingual-e5-large model
  • Reddit community for the source content
  • DeepSpeed team for optimization toolkit
  • EuroHPC Joint Undertaking for granting access to the LUMI supercomputer, hosted by CSC (Finland) and the LUMI consortium

Version History

  • 1.0 (November 2024): Initial release
  • 2.0 (March 2026): Current release, trained following the "Less is More" recipe described above