
Span Consistency Network (SCN) + DeBERTa-v3-Large

A custom architecture trained with a custom loss (Span-Count Regularized Dice, SCRD) for span-level extraction.

Architecture

  • Encoder: microsoft/deberta-v3-large
  • Span Consistency Network (SCN)
  • Marker-specific span width priors
  • Cross-marker attention (Actor → Action → Effect)
  • Loss: Span-Count Regularized Dice (SCRD)
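The exact SCRD formulation is defined by the project itself; purely as an illustration of the idea behind the name, a soft Dice loss combined with a span-count penalty might look like the sketch below. The start-transition approximation of the span count is an assumption, not the published loss.

```python
import torch

def dice_loss(probs, targets, eps=1.0):
    # Soft Dice over token-level span probabilities.
    inter = (probs * targets).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + targets.sum() + eps)

def scrd_loss(probs, targets, count_weight=0.1):
    # Hypothetical Span-Count Regularized Dice: Dice term plus a penalty on
    # the gap between predicted and gold span counts, approximated here by
    # counting span starts (0 -> 1 transitions along the sequence).
    pred_starts = torch.clamp(probs[1:] - probs[:-1], min=0).sum()
    gold_starts = torch.clamp(targets[1:] - targets[:-1], min=0).sum()
    return dice_loss(probs, targets) + count_weight * (pred_starts - gold_starts).abs()
```

A perfect prediction drives both the Dice term and the count penalty to zero; fragmenting one gold span into several predicted spans is penalized even when token overlap is high.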

Files

  • model.pt: PyTorch state dict + config
  • Tokenizer files: standard HuggingFace tokenizer artifacts

Usage

This is not a transformers.PreTrainedModel, so it cannot be loaded with AutoModel.from_pretrained.

Load manually in PyTorch:

import torch

# Instantiate the SCN architecture from the project code first, e.g.:
# model = SpanConsistencyNetwork(...)  # defined in the accompanying repository
ckpt = torch.load('model.pt', map_location='cpu')
model.load_state_dict(ckpt['model_state_dict'])
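Since model.pt bundles a state dict together with a config, a checkpoint of that shape can be round-tripped as sketched below. The toy nn.Linear stands in for the real SCN model, and any key beyond model_state_dict is an assumption.

```python
import torch
import torch.nn as nn

# Toy stand-in for the SCN model; the real class lives in the project code.
model = nn.Linear(4, 2)

# Save a checkpoint in the same shape as model.pt: state dict plus config.
ckpt_path = 'toy_checkpoint.pt'
torch.save({'model_state_dict': model.state_dict(),
            'config': {'hidden_size': 4}},  # placeholder config contents
           ckpt_path)

# Reload on CPU and restore the weights into a freshly built model.
ckpt = torch.load(ckpt_path, map_location='cpu')
model.load_state_dict(ckpt['model_state_dict'])
print(sorted(ckpt.keys()))
```

Keeping the config inside the checkpoint lets a loader rebuild the architecture before calling load_state_dict, which is why map_location='cpu' is enough even for GPU-trained weights.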

Citation

If you use this model, please cite the associated paper or repository.
