Use from the Transformers library

```
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="xu1998hz/InstructScore")
```

```
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("xu1998hz/InstructScore")
model = AutoModelForCausalLM.from_pretrained("xu1998hz/InstructScore")
```
InstructScore (SEScore3)

An explainable metric for text generation evaluation that produces a diagnostic report

First, install all required dependencies: pip3 install -r requirements.txt

To run our metric, you only need five lines of code:

For more details, please visit our GitHub repository: https://github.com/xu1998hz/SEScore3/

```
from InstructScore import *

# Reference texts and candidate (system) outputs to compare, aligned by index
refs = ["Normally the administration office downstairs would call me when there’s a delivery."]
outs = ["Usually when there is takeaway, the management office downstairs will call."]

scorer = InstructScore()
# Returns a diagnostic report and a numeric score for each ref/out pair
batch_outputs, scores_ls = scorer.score(refs, outs)
```
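To make the diagnostic report concrete, here is a minimal sketch of how a list of identified errors could map to a single numeric score. The report structure, field names, and severity weights (-1 per minor error, -5 per major error) below are assumptions for illustration only, not InstructScore's actual internals.

```
# Illustrative sketch: scoring a parsed diagnostic report.
# Severity weights and the report schema are hypothetical.

def penalty_score(errors):
    """Sum severity penalties over the errors listed in a diagnostic report."""
    weights = {"minor": -1, "major": -5}
    return sum(weights[e["severity"]] for e in errors)

# A hypothetical parsed report: two errors flagged in one candidate output.
report = [
    {"span": "takeaway", "type": "mistranslation", "severity": "major"},
    {"span": "Usually", "type": "style", "severity": "minor"},
]
print(penalty_score(report))  # -6
```

Under this reading, a score closer to 0 means fewer and less severe errors in the candidate output.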