Commit 014cf4a

Initial Leesplank Noot demo implementation

- Gradio interface for Dutch text simplification
- Support for three models: Granite-3.3-2b, Llama-3.2-3b, EuroLLM-1.7b
- Lazy model loading with caching for efficient memory usage
- Performance metrics display (tokens/sec, timing)
- Bilingual Dutch/English interface
- 4 example texts for quick testing
- Optimized for HuggingFace Spaces deployment
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
- .gitignore +54 -0
- README.md +55 -0
- app.py +235 -0
- requirements.txt +5 -0
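The "lazy model loading with caching" listed above keeps at most one model resident at a time. The core pattern can be sketched independently of transformers (a minimal illustration; `SingleSlotCache` is a hypothetical name, not part of the app):

```python
from typing import Callable, Dict, Optional

class SingleSlotCache:
    """Keep at most one expensive object (e.g. a model) loaded at a time."""
    def __init__(self, loader: Callable[[str], object]):
        self._loader = loader
        self._key: Optional[str] = None
        self._value: Optional[object] = None

    def get(self, key: str) -> object:
        if self._key == key and self._value is not None:
            return self._value          # cache hit: reuse the loaded object
        self._value = None              # drop the previous object before loading
        self._value = self._loader(key)
        self._key = key
        return self._value

loads = []
cache = SingleSlotCache(lambda k: loads.append(k) or f"model:{k}")
cache.get("a"); cache.get("a"); cache.get("b")
# loads == ["a", "b"]: "a" was loaded once, then evicted for "b"
```

In the real app the loader is `transformers.pipeline(...)` and eviction is followed by a GPU-memory flush, but the caching logic is the same.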
.gitignore
ADDED
@@ -0,0 +1,54 @@
+# Python
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+env/
+venv/
+ENV/
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyCharm
+.idea/
+
+# VSCode
+.vscode/
+
+# Jupyter
+.ipynb_checkpoints
+
+# macOS
+.DS_Store
+
+# Model cache
+*.pt
+*.bin
+*.safetensors
+models/
+
+# Gradio
+flagged/
+gradio_cached_examples/
+
+# Environment variables
+.env
+.env.local
+
+# Logs
+*.log
README.md
ADDED
@@ -0,0 +1,55 @@
+---
+title: Leesplank Noot - Dutch Text Simplification Demo
+emoji: 📝
+colorFrom: blue
+colorTo: green
+sdk: gradio
+sdk_version: 4.44.0
+app_file: app.py
+pinned: false
+models:
+  - UWV/leesplank-noot-granite-3.3-2b
+  - UWV/leesplank-noot-llama-3.2-3b
+  - UWV/leesplank-noot-eurollm-1.7b
+---
+
+# Leesplank Noot - Dutch Text Simplification Demo
+
+Interactive demo for Dutch text simplification models that convert complex text to B1 reading level.
+
+## Models
+
+This demo showcases three fine-tuned models:
+
+| Model | SARI Score | Speed (tokens/s) | Parameters |
+|-------|------------|------------------|------------|
+| **Granite-3.3-2b** | 67.80 ±0.22 | 9.53 | 2B |
+| **Llama-3.2-3b** | 67.50 ±0.50 | 15.91 | 3B |
+| **EuroLLM-1.7b** | 66.44 ±0.32 | 27.50 | 1.7B |
+
+## Features
+
+- **Model Selection**: Choose between three specialized models
+- **Real-time Simplification**: Instant text simplification
+- **Example Texts**: Pre-loaded Dutch examples
+- **Performance Metrics**: Token count and generation speed
+- **Bilingual Interface**: Dutch and English instructions
+
+## Usage
+
+1. Select a model from the dropdown
+2. Enter Dutch text to simplify
+3. Click "Vereenvoudig / Simplify"
+4. View the simplified result
+
+## About
+
+These models were developed by UWV to make government communication more accessible to citizens with reading difficulties. They are trained on 1.89M Dutch Wikipedia simplifications and achieve B1-level output.
+
+## License
+
+Apache 2.0
+
+## Contact
+
+Maintainer: UWV Innovatie Hub - innovatie@uwv.nl
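The three model repos listed in the README metadata are prompted slightly differently: EuroLLM is given the instruction directly, while Llama and Granite also receive a system prompt. A minimal sketch of that message construction (`build_messages` is an illustrative helper, not part of the repo; the Dutch system prompt is abbreviated here):

```python
# Abbreviated stand-in for the full Dutch SYSTEM_PROMPT defined in app.py.
SYSTEM_PROMPT = "Je bent een AI-assistent die Nederlandse teksten vereenvoudigt..."

def build_messages(model_path: str, text: str) -> list:
    """Build chat messages for the given model repo id."""
    user = {"role": "user", "content": f"Vereenvoudig: {text}"}
    if "eurollm" in model_path.lower():
        # EuroLLM performs better without a system prompt
        return [user]
    # Llama and Granite take the simplification instructions as a system message
    return [{"role": "system", "content": SYSTEM_PROMPT}, user]

msgs = build_messages("UWV/leesplank-noot-eurollm-1.7b", "Complexe tekst")
# → a single user message, no system role
```

The resulting list is what a `transformers` text-generation pipeline accepts as chat input; the pipeline applies the model's own chat template before generation.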
app.py
ADDED
@@ -0,0 +1,235 @@
+import gc
+import time
+from typing import Dict, Optional
+
+import gradio as gr
+import torch
+from transformers import Pipeline, pipeline
+
+# Model configurations
+MODELS = {
+    "Granite-3.3-2b (Highest Quality)": "UWV/leesplank-noot-granite-3.3-2b",
+    "Llama-3.2-3b (Balanced)": "UWV/leesplank-noot-llama-3.2-3b",
+    "EuroLLM-1.7b (Fastest)": "UWV/leesplank-noot-eurollm-1.7b"
+}
+
+# System prompt for Llama and Granite models
+SYSTEM_PROMPT = """Je bent een AI-assistent die Nederlandse teksten vereenvoudigt naar een helder, toegankelijk niveau voor iedereen, vergelijkbaar met de heldere taal die het Jeugdjournaal gebruikt. Behoud de betekenis en belangrijke informatie, maar gebruik eenvoudigere woorden en kortere zinnen. Schrijf niet kinderlijk, maar wel toegankelijk."""
+
+# Example texts
+EXAMPLES = [
+    "Een pekdruppelexperiment is een langetermijnexperiment dat het vloeien van een stuk pek meet over vele jaren. Pek is een verzamelnaam voor een aantal vloeistoffen met een zeer hoge viscositeit, zoals teer en bitumen, die er bij kamertemperatuur uitzien als een vaste stof, maar in feite zeer dik vloeibaar zijn en uiteindelijk druppels vormen.",
+    "De kwantummechanica is een natuurkundige theorie die het gedrag beschrijft van materie en energie op de schaal van atomen en subatomaire deeltjes. In tegenstelling tot de klassieke mechanica, waar objecten een bepaalde positie en snelheid hebben, beschrijft de kwantummechanica deeltjes met waarschijnlijkheidsgolven.",
+    "Fotosynthese is het biologische proces waarbij planten, algen en sommige bacteriën lichtenergie omzetten in chemische energie. Dit gebeurt in de chloroplasten, waar chlorofyl zonlicht absorbeert en gebruikt om koolstofdioxide en water om te zetten in glucose en zuurstof.",
+    "Een algoritme is een eindige reeks goed gedefinieerde instructies om een bepaald probleem op te lossen of een berekening uit te voeren. In de informatica worden algoritmes gebruikt als specificaties voor het uitvoeren van berekeningen, gegevensverwerking, geautomatiseerd redeneren en andere taken."
+]
+
+# Global model cache (at most one model is kept resident at a time)
+MODEL_CACHE: Dict[str, Optional[Pipeline]] = {}
+current_model_name: Optional[str] = None
+
+
+def clear_gpu_memory():
+    """Free Python and CUDA memory when switching models."""
+    gc.collect()
+    if torch.cuda.is_available():
+        torch.cuda.empty_cache()
+
+
+def load_model(model_display_name: str) -> Pipeline:
+    """Load the selected model, reusing the cached instance when possible."""
+    global current_model_name
+
+    model_path = MODELS[model_display_name]
+
+    # If the model is already cached, return it
+    if MODEL_CACHE.get(model_path) is not None:
+        current_model_name = model_path
+        return MODEL_CACHE[model_path]
+
+    # Drop the previously loaded model before loading a different one
+    if current_model_name and current_model_name != model_path:
+        MODEL_CACHE.pop(current_model_name, None)
+        clear_gpu_memory()
+
+    # Load the new model
+    try:
+        model = pipeline(
+            "text-generation",
+            model=model_path,
+            torch_dtype="auto",
+            device_map="auto"
+        )
+        MODEL_CACHE[model_path] = model
+        current_model_name = model_path
+        return model
+    except Exception as e:
+        raise gr.Error(f"Failed to load model: {e}")
+
+
+def simplify_text(text: str, model_name: str, show_metrics: bool = True):
+    """Simplify Dutch text using the selected model.
+
+    Written as a generator so Gradio can stream status updates to the UI.
+    """
+    if not text.strip():
+        yield "Voer tekst in om te vereenvoudigen / Enter text to simplify", ""
+        return
+
+    # Load model
+    yield f"Model laden / Loading model: {model_name}...", ""
+
+    model = load_model(model_name)
+    model_path = MODELS[model_name]
+
+    # Format the prompt based on the model family
+    if "eurollm" in model_path.lower():
+        # EuroLLM performs better without a system prompt
+        messages = [{
+            "role": "user",
+            "content": f"Vereenvoudig: {text}"
+        }]
+    else:
+        # Llama and Granite use the system prompt
+        messages = [
+            {"role": "system", "content": SYSTEM_PROMPT},
+            {"role": "user", "content": f"Vereenvoudig: {text}"}
+        ]
+
+    # Generate with timing
+    yield "Tekst vereenvoudigen / Simplifying text...", ""
+
+    start_time = time.time()
+
+    try:
+        output = model(
+            messages,
+            max_new_tokens=256,
+            return_full_text=False,
+            do_sample=False,  # Greedy decoding for consistent output
+            pad_token_id=model.tokenizer.eos_token_id,
+            eos_token_id=model.tokenizer.eos_token_id
+        )
+
+        generation_time = time.time() - start_time
+        simplified = output[0]["generated_text"].strip()
+
+        # Calculate metrics
+        if show_metrics:
+            input_tokens = len(model.tokenizer.encode(text))
+            output_tokens = len(model.tokenizer.encode(simplified))
+            tokens_per_sec = output_tokens / generation_time if generation_time > 0 else 0
+
+            metrics = f"""
+**Prestaties / Performance:**
+- Model: {model_name}
+- Invoer tokens / Input tokens: {input_tokens}
+- Uitvoer tokens / Output tokens: {output_tokens}
+- Tijd / Time: {generation_time:.2f}s
+- Snelheid / Speed: {tokens_per_sec:.2f} tokens/s
+"""
+        else:
+            metrics = ""
+
+        # The final yield delivers the result; a bare `return value` inside
+        # a generator would be discarded by Gradio
+        yield simplified, metrics
+
+    except Exception as e:
+        raise gr.Error(f"Fout bij vereenvoudigen / Error simplifying: {e}")
+
+
+def create_interface():
+    """Create the Gradio interface."""
+    with gr.Blocks(title="Leesplank Noot - Dutch Text Simplification") as demo:
+        gr.Markdown("""
+        # 📝 Leesplank Noot - Nederlandse Tekstvereenvoudiging / Dutch Text Simplification
+
+        Vereenvoudig Nederlandse teksten naar B1-niveau voor betere toegankelijkheid.
+        *Simplify Dutch texts to B1 level for better accessibility.*
+
+        ---
+        """)
+
+        with gr.Row():
+            with gr.Column(scale=1):
+                model_dropdown = gr.Dropdown(
+                    choices=list(MODELS.keys()),
+                    value="Granite-3.3-2b (Highest Quality)",
+                    label="Kies model / Choose model",
+                    info="Selecteer het model voor vereenvoudiging / Select simplification model"
+                )
+
+                show_metrics = gr.Checkbox(
+                    value=True,
+                    label="Toon prestaties / Show performance metrics"
+                )
+
+        with gr.Row():
+            with gr.Column(scale=1):
+                input_text = gr.Textbox(
+                    label="Originele tekst / Original text",
+                    placeholder="Voer hier de te vereenvoudigen tekst in...\nEnter text to simplify here...",
+                    lines=10
+                )
+
+                simplify_btn = gr.Button(
+                    "🔄 Vereenvoudig / Simplify",
+                    variant="primary",
+                    scale=1
+                )
+
+            with gr.Column(scale=1):
+                output_text = gr.Textbox(
+                    label="Vereenvoudigde tekst / Simplified text",
+                    lines=10,
+                    interactive=False
+                )
+
+                metrics_display = gr.Markdown(
+                    label="Metrics",
+                    visible=True
+                )
+
+        with gr.Row():
+            gr.Examples(
+                examples=EXAMPLES,
+                inputs=input_text,
+                label="Voorbeelden / Examples"
+            )
+
+        with gr.Accordion("ℹ️ Over deze demo / About this demo", open=False):
+            gr.Markdown("""
+            Deze demo toont drie Nederlandse tekstvereenvoudigingsmodellen ontwikkeld door UWV:
+
+            - **Granite-3.3-2b**: Hoogste kwaliteit (SARI 67.80)
+            - **Llama-3.2-3b**: Gebalanceerde prestaties
+            - **EuroLLM-1.7b**: Snelste model (27.5 tokens/s)
+
+            Alle modellen zijn getraind op 1.89M Nederlandse Wikipedia-vereenvoudigingen en produceren tekst op B1-niveau.
+
+            *This demo showcases three Dutch text simplification models developed by UWV, trained on 1.89M Dutch Wikipedia simplifications to produce B1-level text.*
+
+            **Contact**: innovatie@uwv.nl
+            """)
+
+        # Event handlers
+        simplify_btn.click(
+            fn=simplify_text,
+            inputs=[input_text, model_dropdown, show_metrics],
+            outputs=[output_text, metrics_display]
+        )
+
+        # Also trigger on submit in the input field
+        input_text.submit(
+            fn=simplify_text,
+            inputs=[input_text, model_dropdown, show_metrics],
+            outputs=[output_text, metrics_display]
+        )
+
+    return demo
+
+
+# Initialize and launch
+if __name__ == "__main__":
+    demo = create_interface()
+    demo.queue(max_size=10)
+    demo.launch(
+        server_name="0.0.0.0",
+        server_port=7860,
+        share=False
+    )
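A note on the handler above: Gradio treats a generator event handler as a streaming function, where each `yield` updates the output components and the last `yield` must carry the final result (a value passed to `return` inside a generator is not displayed). The pattern can be sketched framework-free, with a stub in place of the model call:

```python
import time

def simplify_stream(text: str):
    """Generator-style handler: status updates first, result last."""
    yield "Loading model...", ""           # status update 1
    yield "Simplifying text...", ""        # status update 2
    start = time.time()
    simplified = text.lower()              # stub standing in for the model call
    elapsed = time.time() - start
    yield simplified, f"Time: {elapsed:.2f}s"  # final yield carries the result

updates = list(simplify_stream("EEN VOORBEELD"))
final_text, _ = updates[-1]  # the last tuple holds the simplified text
```

Each yielded tuple maps positionally onto the `outputs=[output_text, metrics_display]` list in the event wiring, just as in app.py.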
requirements.txt
ADDED
@@ -0,0 +1,5 @@
+gradio==4.44.0
+transformers==4.45.0
+torch==2.1.0
+accelerate==0.25.0
+sentencepiece==0.2.0