ToMMeR-pythia-1b_L5_R64


ToMMeR is a lightweight probing model that extracts emergent mention-detection capabilities from the early-layer representations of any LLM backbone, achieving high zero-shot recall across a set of 13 NER benchmarks.

Model Details

This model plugs into layer 5 of EleutherAI/pythia-1b, with a computational overhead no greater than that of a single additional attention head.

| Property | Value |
|----------|-------|
| Base LLM | EleutherAI/pythia-1b |
| Layer | 5 |
| #Params | 264.2K |
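The parameter count is consistent with a rank-64 probe (the R64 in the model name) over Pythia-1b's 2048-dimensional hidden states. A back-of-the-envelope check, under our own assumption of two rank-64 projection matrices (not an official architecture spec):

```python
# Rough, unofficial estimate: two rank-64 projections of the 2048-dim
# hidden states (an assumption about the architecture, not a spec).
d_model, rank = 2048, 64
print(2 * d_model * rank)  # 262144 ≈ 262K, close to the 264.2K reported
```

The small remainder would plausibly be biases or scoring parameters.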

Usage

Installation

To use ToMMeR, first install its codebase:

```bash
pip install git+https://github.com/VictorMorand/llm2ner.git
```

Raw inference

By default, ToMMeR outputs span probabilities, but we also provide built-in options for decoding entities.

- Inputs:
  - tokens (batch, seq): token ids to process,
  - model: the LLM to extract representations from.
- Outputs: a (batch, seq, seq) matrix of span probabilities (masked outside valid spans).
```python
from xpm_torch.huggingface import TorchHFHub
from llm2ner import ToMMeR, utils

tommer: ToMMeR = TorchHFHub.from_pretrained("llm2ner/ToMMeR-pythia-1b_L5_R64")
# Load the backbone LLM, optionally cutting the unused layers to save GPU memory.
llm = utils.load_llm(tommer.llm_name, cut_to_layer=tommer.layer)
tommer.to(llm.device)

# Raw inference
text = ["Large language models are awesome"]
print(f"Input text: {text[0]}")

# Tokenize to shape (1, seq_len)
tokens = llm.tokenizer(text, return_tensors="pt")["input_ids"].to(llm.device)
# Raw span scores
output = tommer.forward(tokens, llm)  # (batch_size, seq_len, seq_len)
print(f"Raw Output shape: {output.shape}")

# Use the chosen decoding strategy to infer entities
entities = tommer.infer_entities(tokens=tokens, model=llm, threshold=0.5, decoding_strategy="greedy")
str_entities = [llm.tokenizer.decode(tokens[0, b:e + 1]) for b, e in entities[0]]
print(f"Predicted entities: {str_entities}")
```

```
INFO:root:Cut LlamaModel with 16 layers to 7 layers
Input text: Large language models are awesome
Raw Output shape: torch.Size([1, 6, 6])
Predicted entities: ['Large language models']
```
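If you want to work directly with the raw span-probability matrix instead of the built-in decoders, a minimal threshold-decoding sketch is shown below. It assumes (as described above) that `output[0, b, e]` scores the span `tokens[b:e+1]` with probabilities in [0, 1] and invalid positions masked out; in practice, prefer `infer_entities`.

```python
import torch

# Hypothetical manual decoding: keep every span (b, e) whose probability
# exceeds the threshold. Assumes row = span begin, column = span end,
# and that masked (invalid) positions score below the threshold.
probs = output[0]                                  # (seq_len, seq_len)
begin_idx, end_idx = torch.where(probs > 0.5)
spans = [(b.item(), e.item()) for b, e in zip(begin_idx, end_idx)]
print([llm.tokenizer.decode(tokens[0, b:e + 1]) for b, e in spans])
```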

Fancy Outputs

We also provide inference and plotting utilities in llm2ner.plotting.

```python
from xpm_torch.huggingface import TorchHFHub
from llm2ner import ToMMeR, utils, plotting

tommer: ToMMeR = TorchHFHub.from_pretrained("llm2ner/ToMMeR-pythia-1b_L5_R64")
# Load the backbone LLM, optionally cutting the unused layers to save GPU memory.
llm = utils.load_llm(tommer.llm_name, cut_to_layer=tommer.layer)
tommer.to(llm.device)

text = "Large language models are awesome. While trained on language modeling, they exhibit emergent Zero Shot abilities that make them suitable for a wide range of tasks, including Named Entity Recognition (NER)."

# Fancy interactive output
outputs = plotting.demo_inference(
    text, tommer, llm,
    decoding_strategy="threshold",  # or "greedy" for flat segmentation
    threshold=0.5,  # default 50%
    show_attn=True,
)
```
(Interactive output: the input text rendered with the predicted mention spans highlighted.)
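The two decoding strategies behave differently: "threshold" keeps every span above the cutoff (possibly nested or overlapping), while "greedy" produces a flat segmentation. A sketch comparing them on the same input, reusing `tommer` and `llm` from above and only the `infer_entities` call shown earlier:

```python
# Compare decoding strategies on the same tokenized input (sketch).
tokens = llm.tokenizer([text], return_tensors="pt")["input_ids"].to(llm.device)
for strategy in ("threshold", "greedy"):
    entities = tommer.infer_entities(tokens=tokens, model=llm,
                                     threshold=0.5, decoding_strategy=strategy)
    print(strategy, [llm.tokenizer.decode(tokens[0, b:e + 1]) for b, e in entities[0]])
```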

Please visit the repository for more details and a demo notebook.

Evaluation Results

| Dataset | Precision | Recall | F1 | #Samples |
|---------|-----------|--------|----|----------|
| MultiNERD | 0.1909 | 0.9589 | 0.3184 | 154144 |
| CoNLL 2003 | 0.2510 | 0.7244 | 0.3728 | 16493 |
| CrossNER_politics | 0.2513 | 0.9496 | 0.3975 | 1389 |
| CrossNER_AI | 0.2885 | 0.9159 | 0.4388 | 879 |
| CrossNER_literature | 0.3120 | 0.8965 | 0.4629 | 916 |
| CrossNER_science | 0.3099 | 0.9246 | 0.4642 | 1193 |
| CrossNER_music | 0.3411 | 0.9213 | 0.4979 | 945 |
| ncbi | 0.1101 | 0.8713 | 0.1955 | 3952 |
| FabNER | 0.2884 | 0.7485 | 0.4164 | 13681 |
| WikiNeural | 0.1803 | 0.9358 | 0.3023 | 92672 |
| GENIA_NER | 0.2171 | 0.9353 | 0.3524 | 16563 |
| ACE 2005 | 0.2293 | 0.3416 | 0.2744 | 8230 |
| Ontonotes | 0.2190 | 0.6803 | 0.3314 | 42193 |
| **Aggregated** | 0.2009 | 0.8835 | 0.3273 | 353250 |
| **Mean** | 0.2453 | 0.8311 | 0.3711 | 353250 |
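The Aggregated row appears to be micro-averaged over all 353,250 samples, while Mean is the unweighted macro average across the 13 datasets (our reading of the table, not stated explicitly). The macro average of the precision column can be checked directly:

```python
# Unweighted macro average of the per-dataset precisions listed above.
precisions = [0.1909, 0.2510, 0.2513, 0.2885, 0.3120, 0.3099, 0.3411,
              0.1101, 0.2884, 0.1803, 0.2171, 0.2293, 0.2190]
print(round(sum(precisions) / len(precisions), 4))  # 0.2453, matching the Mean row
```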

Citation

If you use this model or the approach, please cite the associated paper:

```bibtex
@misc{morand2025tommerefficiententity,
      title={ToMMeR -- Efficient Entity Mention Detection from Large Language Models},
      author={Victor Morand and Nadi Tomeh and Josiane Mothe and Benjamin Piwowarski},
      year={2025},
      eprint={2510.19410},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.19410},
}
```

License

Apache-2.0 (see repository for full text).

