# InstructScore (SEScore3)
An explainable metric for text generation evaluation that produces a diagnostic report for each output.

First, install all required dependencies: `pip3 install -r requirements.txt`

To run our metric, you only need five lines. For details, please visit our GitHub repository: https://github.com/xu1998hz/SEScore3/
```python
from InstructScore import *

# References and candidate system outputs, aligned by index.
refs = ["Normally the administration office downstairs would call me when there’s a delivery."]
outs = ["Usually when there is takeaway, the management office downstairs will call."]

scorer = InstructScore()
# batch_outputs: textual diagnostic reports; scores_ls: numeric scores.
batch_outputs, scores_ls = scorer.score(refs, outs)
```
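Judging from the variable names, `scorer.score` appears to return two parallel lists: one diagnostic-report string and one numeric score per (reference, output) pair. A minimal sketch of pairing and inspecting the results, under that assumption and with placeholder values standing in for real InstructScore outputs:

```python
# Hypothetical post-processing sketch. Assumption (not stated in the card):
# scorer.score(refs, outs) returns one diagnostic-report string and one
# numeric score per (reference, output) pair. The values below are
# placeholders standing in for real InstructScore results.
outs = ["Usually when there is takeaway, the management office downstairs will call."]
batch_outputs = ["<diagnostic report for candidate 1>"]  # report text from the model
scores_ls = [-1.0]                                       # penalty-style score (assumed)

# Pair each candidate with its report and score for inspection.
for out, report, score in zip(outs, batch_outputs, scores_ls):
    print(f"candidate: {out}")
    print(f"score:     {score}")
    print(f"report:    {report}")
```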