🔧 GriceBench-Repair

Rewrites Gricean maxim violations into cooperative dialogue: surgical, per-maxim edits, not generic rewrites.


Part of the GriceBench system: GitHub | 🔍 Detector | ⚡ DPO Generator


What This Model Does

GriceBench-Repair is a T5-base seq2seq model that rewrites Gricean maxim violations into cooperative responses. It is violation-type-aware: different maxims use different generation strategies because the nature of the repair task differs.

| Violation | Decoding Strategy | Why |
|-----------|-------------------|-----|
| Quantity | Beam search (n=4) + length constraints | Needs precise length control |
| Quality | Beam search (n=4) + repetition penalty | Needs factual precision |
| Manner | Nucleus sampling (T=0.85, top-p=0.92) | Needs creative, diverse rewrites |
| Relation | NOT this model; use FAISS retrieval | Entire response is off-topic; editing cannot fix it |

Violation removal rate: 93.0% (post-fix evaluation, N=200)


Quick Start

from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch

model_name = "Pushkar27/GriceBench-Repair"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()

def repair_violation(context: str, response: str, violation_type: str) -> str:
    assert violation_type in ["quantity", "quality", "manner"], \
        "Relation violations must use the FAISS retrieval system, not this model."

    input_text = f"fix {violation_type}: [CONTEXT] {context} [RESPONSE] {response}"
    inputs = tokenizer(input_text, return_tensors="pt", max_length=256, truncation=True)

    with torch.no_grad():
        if violation_type == "manner":
            output_ids = model.generate(
                **inputs,
                do_sample=True, temperature=0.85, top_p=0.92,
                max_length=128, min_length=8,
                repetition_penalty=1.5, no_repeat_ngram_size=3,
            )
        else:
            output_ids = model.generate(
                **inputs,
                num_beams=4, max_length=128, min_length=8,
                repetition_penalty=1.5, no_repeat_ngram_size=3,
            )

    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Quantity (too short)
print(repair_violation(
    context="What do you think about commercial space travel?",
    response="It's fine.",
    violation_type="quantity"
))

# Manner (ambiguous pronouns)
print(repair_violation(
    context="Alice told Bob she would handle the project.",
    response="She said she would do it before she left.",
    violation_type="manner"
))
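In a full pipeline, this repair model is one branch of a router: the detector picks the violated maxim, Relation goes to retrieval, and everything else comes here. A minimal sketch of that control flow, using toy stand-ins (the `detect`, `retrieve`, and `repair` lambdas below are placeholders for the real GriceBench-Detector, the FAISS index, and `repair_violation` above):

```python
def route_repair(context, response, detect_violation, repair, retrieve):
    """Send a flagged response to the right repair mechanism."""
    violation = detect_violation(context, response)
    if violation is None:
        return response                      # already cooperative
    if violation == "relation":
        return retrieve(context)             # off-topic: replace, don't edit
    return repair(context, response, violation)  # quantity / quality / manner

# Toy stand-ins that only illustrate the control flow:
detect = lambda ctx, resp: "relation" if "pizza" in resp else None
retrieve = lambda ctx: "[retrieved on-topic response]"
repair = lambda ctx, resp, v: f"[repaired {v}] {resp}"

print(route_repair("Tell me about Mars.", "I love pizza.", detect, repair, retrieve))
```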

Performance

Violation removal rate: 93.0% (post-fix evaluation)

Per-maxim BLEU scores on the repair validation set (N=401):

| Violation Type | BLEU | Notes |
|----------------|------|-------|
| Quality | 97.8% | Near-perfect factual correction |
| Manner | 92.5% | Strong clarity improvements |
| Quantity | 61.8% | Harder: requires insertions/deletions |
| Relation | N/A | Route to FAISS retrieval |
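For spot-checking repairs locally, a self-contained sentence-BLEU (uniform 1–4-gram weights, clipped counts, brevity penalty) is enough. The card does not say which BLEU implementation produced the numbers above, so treat this as an illustrative approximation rather than the exact scorer:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU: clipped n-gram precision with a brevity penalty.
    A tiny epsilon keeps the log defined when an n-gram order has no matches."""
    ref, hyp = reference.split(), hypothesis.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        log_prec += math.log((overlap + 1e-9) / total)
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(log_prec / max_n)
```

An identical repair and reference scores 1.0; disjoint sentences score near 0.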

Degeneracy fix (before vs. after violation-type-aware decoding):

| Maxim | Before Fix | After Fix | Improvement |
|-------|------------|-----------|-------------|
| Quantity | 30.1% degenerate | 2.1% | −28.0 pp |
| Manner | 93.3% degenerate | 4.5% | −88.8 pp |
| Overall | 64.4% degenerate | 5.2% | −59.2 pp |

Architecture & Training

  • Base model: google-t5/t5-base (220M parameters)
  • Training pairs: 3,210 seq2seq pairs (violation → cooperative)
  • Validation pairs: 401
  • Epochs: 5 | Label smoothing: 0.1 | Hardware: Kaggle T4
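Label smoothing 0.1 trains against a softened target distribution, (1 − ε) on the gold token plus ε spread uniformly over the vocabulary, which discourages over-confident decoding. A per-token sketch of that loss (pure Python, for illustration only, not the training code):

```python
import math

def smoothed_cross_entropy(probs, target, epsilon=0.1):
    """Cross-entropy against the smoothed target distribution
    q[c] = (1 - epsilon) * one_hot(target)[c] + epsilon / V."""
    V = len(probs)
    loss = 0.0
    for c, p in enumerate(probs):
        q = (1 - epsilon) * (1.0 if c == target else 0.0) + epsilon / V
        loss -= q * math.log(p)
    return loss

probs = [0.9, 0.05, 0.03, 0.02]
# Larger than -log(0.9): smoothing penalizes mass missing from non-gold tokens.
print(smoothed_cross_entropy(probs, target=0, epsilon=0.1))
print(smoothed_cross_entropy(probs, target=0, epsilon=0.0))  # exactly -log(0.9)
```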

Three-layer degeneracy prevention:

  1. Violation-type-aware decoding (nucleus sampling for Manner, beam for others)
  2. Post-generation multi-signal filter
  3. Graceful fallback with an is_fallback: True flag
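The card does not spell out the filter's exact signals, but a plausible minimal version of layers 2 and 3 checks for repeated n-grams and too-short or unchanged outputs, returning the original response flagged with is_fallback: True when the candidate fails (a sketch; function names and thresholds are illustrative assumptions):

```python
from collections import Counter

def looks_degenerate(text, max_trigram_repeat=1, min_words=4):
    """Heuristic degeneracy signals: output too short, or any trigram
    repeated more than `max_trigram_repeat` times."""
    words = text.split()
    if len(words) < min_words:
        return True
    trigrams = Counter(tuple(words[i:i + 3]) for i in range(len(words) - 2))
    return any(count > max_trigram_repeat for count in trigrams.values())

def filtered_repair(original, candidate):
    """Layers 2 and 3 combined: reject degenerate or unchanged candidates
    and fall back to the original response, flagged accordingly."""
    if looks_degenerate(candidate) or candidate.strip() == original.strip():
        return {"text": original, "is_fallback": True}
    return {"text": candidate, "is_fallback": False}
```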

Why Relation Violations Use Retrieval

Relation violations mean the entire response is off-topic; there is nothing to edit. We route Relation repairs to a FAISS index over 50,000 Topical-Chat responses (MRR > 0.70, Top-1 accuracy > 60%).
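The reported retrieval metrics are standard. Assuming one gold response per query, MRR and Top-1 accuracy can be computed from the gold item's rank in each result list:

```python
def retrieval_metrics(gold_ranks):
    """gold_ranks: 1-based rank of the gold response for each query.
    Returns (mean reciprocal rank, top-1 accuracy)."""
    mrr = sum(1.0 / r for r in gold_ranks) / len(gold_ranks)
    top1 = sum(1 for r in gold_ranks if r == 1) / len(gold_ranks)
    return mrr, top1

# e.g. the gold response ranked 1st, 1st, 2nd, 1st, 5th by the index:
mrr, top1 = retrieval_metrics([1, 1, 2, 1, 5])
print(round(mrr, 2), top1)  # 0.74 0.6
```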


Files

| File | Description |
|------|-------------|
| config.json | T5-base configuration |
| model.safetensors | Trained model weights |
| tokenizer.json | SentencePiece tokenizer |
| tokenizer_config.json | Tokenizer configuration |

Limitations & Biases

  • Hallucination Risk: T5 can occasionally introduce factual errors during repair; verify repaired outputs with the Quality detector.
  • Mode Collapse: beam search tends to produce repetitive, degenerate outputs on Manner repairs (see the degeneracy table above); use the nucleus-sampling path instead.

Citation

@article{prabhath2026gricebench,
  title={GriceBench: Operationalizing Gricean Maxims for Cooperative Dialogue Evaluation and Generation},
  author={Prabhath, Pushkar},
  year={2026},
  note={Under review, EMNLP 2026}
}

Related Models

| Model | Role | Link |
|-------|------|------|
| GriceBench-Detector | Detects which maxim was violated | 🔍 Detector |
| GriceBench-Repair | Repairs violations (this model) | You are here |
| GriceBench-DPO | Generates cooperative responses | ⚡ DPO |

GitHub: https://github.com/PushkarPrabhath27/Research-Model


Environmental Impact

| Aspect | Value |
|--------|-------|
| Hardware Used | NVIDIA Tesla T4 GPU |
| Training Time | ~2 hours |
| Estimated Carbon Footprint | ~0.25 kg CO2eq |