---
base_model:
  - meta-llama/Llama-3.1-8B
datasets:
  - HiTZ/GuideX_pre-training_data
  - ACE05
  - bc5cdr
  - conll2003
  - ncbi_disease
  - conll2012_ontonotesv5
  - rams
  - tacred
  - wnut_17
language:
  - en
license: apache-2.0
metrics:
  - f1
tags:
  - code
  - text-generation-inference
  - Information Extraction
  - IE
  - Named Entity Recognition
  - Event Extraction
  - Relation Extraction
  - LLaMA
pipeline_tag: text-generation
library_name: transformers
---

# Model Card for GuideX-8B

*Guided Synthetic Data Generation for Zero-Shot Information Extraction*

Llama-3.1-GuideX-8B is an 8-billion-parameter language model fine-tuned for high-performance zero-shot Information Extraction (IE). The model is trained to follow detailed annotation guidelines provided as Python dataclasses, allowing it to adapt to new domains and schemas on the fly without requiring task-specific examples.
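To illustrate the idea, the sketch below defines two hypothetical guideline dataclasses (`Disease` and `Chemical` are examples, not the actual GuideX schemas, which ship with the dataset) and renders them into a guideline-conditioned prompt that the model would complete:

```python
from dataclasses import dataclass

# Hypothetical guideline classes in the guideline-as-code style described
# above; the real GuideX schemas may differ in naming and structure.

@dataclass
class Disease:
    """A pathological condition of an organism, mentioned in the text."""
    span: str  # exact mention of the disease in the text

@dataclass
class Chemical:
    """A chemical substance or drug mentioned in the text."""
    span: str  # exact mention of the chemical in the text

def build_prompt(text: str, guidelines: list[type]) -> str:
    """Render each guideline class back to (simplified) source code and
    append the input text, so the model completes the annotation list.
    Assumes all fields are strings, which is enough for this sketch."""
    blocks = []
    for cls in guidelines:
        fields = "\n".join(f"    {name}: str" for name in cls.__dataclass_fields__)
        blocks.append(
            f"@dataclass\nclass {cls.__name__}:\n"
            f'    """{cls.__doc__}"""\n{fields}'
        )
    schema = "\n\n".join(blocks)
    return f"{schema}\n\ntext = {text!r}\nresult ="

prompt = build_prompt("Aspirin is used to treat headaches.", [Disease, Chemical])
print(prompt)
```

Swapping in a different set of dataclasses changes the schema the model extracts against, which is what enables on-the-fly adaptation to new domains without task-specific examples.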

This model achieves state-of-the-art performance on zero-shot Named Entity Recognition (NER) by first training on GuideX, a large-scale synthetic dataset with executable guidelines, and then fine-tuning on a collection of gold-standard IE datasets.

## Model Description

- **Developed by:** Neil De La Fuente, Oscar Sainz, Iker García-Ferrero, Eneko Agirre
- **Institutions:** HiTZ Basque Center for Language Technology - Ixa NLP Group, University of the Basque Country (UPV/EHU); Technical University of Munich (TUM)
- **Model type:** Decoder-only Transformer (text generation)
- **Language(s):** English
- **License:** Llama 3.1 Community License
- **Finetuned from model:** meta-llama/Llama-3.1-8B
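
A minimal inference sketch with `transformers` is shown below. The repository id is an assumption based on this card's title; check the Hub page for the exact identifier.

```python
# Zero-shot IE inference sketch. MODEL_ID is an assumed repository id,
# not confirmed by this card -- verify it on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "HiTZ/Llama-3.1-GuideX-8B"  # assumption

def extract(prompt: str, model, tokenizer, max_new_tokens: int = 256) -> str:
    """Greedy-decode the annotation that completes a guideline prompt."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Keep only the tokens generated after the prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    guideline_prompt = "..."  # a dataclass-guideline prompt as described above
    print(extract(guideline_prompt, model, tokenizer))
```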