---
license: mit
inference: false
language:
- en
---

# sle-base

This is a model for the SLE metric described in the original paper. It is based on [`roberta-base`](https://huggingface.co/roberta-base) with an added regression head.
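A regression head of this kind maps the encoder's sentence representation to a single scalar score. The sketch below illustrates the idea in plain PyTorch; it is an illustration of the architecture, not the library's actual implementation (the 768 hidden size is `roberta-base`'s):

```python
import torch
import torch.nn as nn

class RegressionHead(nn.Module):
    """Maps a sentence representation (e.g. the [CLS] vector) to one scalar."""
    def __init__(self, hidden_size: int = 768):  # 768 = roberta-base hidden size
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.out_proj = nn.Linear(hidden_size, 1)  # single output -> regression

    def forward(self, cls_hidden: torch.Tensor) -> torch.Tensor:
        x = torch.tanh(self.dense(cls_hidden))
        return self.out_proj(x).squeeze(-1)  # one score per sentence

head = RegressionHead()
scores = head(torch.randn(2, 768))  # a batch of 2 sentence vectors
print(scores.shape)  # torch.Size([2])
```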

Install the [python library](https://github.com/liamcripwell/sle).

SLE scores can be calculated in Python as shown in the example below.

For a raw estimate of a sentence's simplicity, use `'sle'`; to evaluate sentence simplification systems, we recommend also providing the input sentences and using `'sle_delta'` ($\Delta \text{SLE}$). See the paper for further details.

```python
from sle.scorer import SLEScorer

scorer = SLEScorer("liamcripwell/sle-base")

texts = [
    "Here is a simple sentence.",
    "Here is an additional sentence that makes use of more complex terminology."
]

# raw simplicity estimates
results = scorer.score(texts)
print(results)  # {'sle': [3.9842946529388428, 0.5840105414390564]}

# delta from input sentences
results = scorer.score([texts[0]], inputs=[texts[1]])
print(results)  # {'sle': [3.9842941761016846], 'sle_delta': [3.4002838730812073]}
```
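The delta score is simply the output sentence's raw SLE score minus the input's. Checking the arithmetic against the values printed above (the raw scores vary slightly in the last decimals between runs):

```python
# Raw SLE scores from the first call above
output_sle = 3.9842946529388428  # "Here is a simple sentence."
input_sle = 0.5840105414390564   # the more complex sentence

# sle_delta = SLE(output) - SLE(input)
delta = output_sle - input_sle
print(round(delta, 4))  # 3.4003 -- consistent with the 'sle_delta' printed above
```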