---
language: en
tags:
- transformers
- pytorch
library_name: transformers
pipeline_tag: text2text-generation
license: mit
base_model:
- google/byt5-small
---

# Model

This is a sample fine-tuned model produced as part of the [LLMPot research project](https://github.com/momalab/LLMPot) and explained further in the [related research manuscript](https://arxiv.org/abs/2405.05999).

## How to Use

This model is a fine-tuned version of [`google/byt5-small`](https://huggingface.co/google/byt5-small) for Modbus protocol emulation: given a Modbus/TCP request frame encoded as a hex string, it generates the corresponding response frame.

Make sure you have `transformers` and `torch` installed:

```bash
pip install transformers torch
```

Load the model and run a single inference:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("cv43/llmpot")
model = AutoModelForSeq2SeqLM.from_pretrained("cv43/llmpot")

# ByT5 is a byte-level model, so the hex string is consumed as-is
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, framework="pt")

# A raw Modbus/TCP request frame encoded as a hex string
request = "02b10000000b00100000000204ffffffff"
result = pipe(request)
print(f"Request: {request}, Response: {result[0]['generated_text']}")
```
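The request above is a raw Modbus/TCP frame. As context for interpreting the model's inputs and outputs, here is a minimal sketch of how such a frame decomposes into the standard MBAP header and PDU; the `parse_modbus_tcp` helper is hypothetical and written only for illustration, with the field layout following the Modbus/TCP specification:

```python
def parse_modbus_tcp(frame_hex: str) -> dict:
    """Split a raw Modbus/TCP frame (hex string) into MBAP header and PDU fields."""
    raw = bytes.fromhex(frame_hex)
    return {
        "transaction_id": int.from_bytes(raw[0:2], "big"),
        "protocol_id": int.from_bytes(raw[2:4], "big"),  # always 0 for Modbus/TCP
        "length": int.from_bytes(raw[4:6], "big"),       # number of bytes that follow this field
        "unit_id": raw[6],
        "function_code": raw[7],                          # e.g. 0x10 = Write Multiple Registers
        "data": raw[8:].hex(),                            # remaining PDU payload
    }

fields = parse_modbus_tcp("02b10000000b00100000000204ffffffff")
print(fields)  # the example request is a function-code 0x10 (Write Multiple Registers) frame
```

This helper is only for inspecting frames; the model itself consumes and emits the raw hex strings directly.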

Alternatively, you can use our [Space](https://huggingface.co/spaces/cv43/llmpot) application, where the model runs in the cloud.