---
version: 0.1.9
license: mit
library_name: prival
tags:
- prompt-validation
- nlp
- python
widget:
- text: Check it out!
src: https://huggingface.co/EugeneXiang/prival
language:
- en
- zh
---
## 🚀 PriVAL: Prompt Input VALidation Toolkit
**PriVAL** is a lightweight and extensible toolkit for evaluating the quality of prompts for LLMs.
It provides **multi-dimensional scoring and improvement suggestions**, helping you write better prompts that deliver more reliable model outputs.
---
## ✨ Features
- **Multi-dimensional scoring**: Covers clarity, ambiguity, injection risk, relevance, and more.
- **Pluggable detectors**: Each dimension is modular, so it's easy to extend or customize (see the sketch after this list).
- **One-line evaluation**: Just `evaluate_prompt(prompt)` to get structured scores and suggestions.
- **Flexible config**: Easily enable/disable dimensions and adjust weights or thresholds.
- **Report generation**: Output in JSON / Markdown / HTML formats.
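
prival's detector interface isn't documented here, so treat the following as a purely hypothetical sketch of what a custom dimension could look like: a callable that maps a prompt to a score from 0.0 to 1.0 plus suggestions, mirroring the result shape shown later in this README.

```python
# Hypothetical sketch only: prival's actual detector interface may differ.
# Each dimension returns a score in [0.0, 1.0] plus improvement suggestions.
def detect_verbosity(prompt: str) -> dict:
    """Toy dimension: reward prompts that carry enough context."""
    words = prompt.split()
    score = min(len(words) / 20.0, 1.0)  # 20+ words earn a full score
    suggestions = [] if score >= 0.6 else ["Add more context or constraints."]
    return {"score": score, "suggestions": suggestions}

print(detect_verbosity("Summarize this article."))
```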
---
## 📦 Installation
```bash
# Basic (recommended)
pip install prival
# Install a specific version
pip install prival==0.1.9
# Full version (includes spaCy-based analysis)
pip install prival[full]
```
---
> ⚠️ macOS or lightweight environments may run into issues with spaCy or language-tool-python.
> If you're not using syntax/structure-related dimensions, install the base version only.
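
A quick smoke test after installing (note: `__version__` is an assumption and may not be exposed, hence the fallback):

```python
import prival

# __version__ is an assumption; fall back gracefully if it isn't exposed.
print(getattr(prival, "__version__", "prival imported (no version attribute)"))
```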
---
## 🧪 Quick Example
```python
from prival import evaluate_prompt
prompt = "Please write a gentle yet firm resignation letter."
result = evaluate_prompt(prompt)
print(result["total_score"])
print(result["clarity"])
print(result["suggestions"])
```
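
Based on the result shape shown under Output + Reporting below, here is one way to walk every scored dimension, reusing `result` from the snippet above (this assumes each dimension maps to a dict with `score` and `suggestions`):

```python
# Walk every scored dimension; scalar keys such as "total_score" are skipped.
for dim, entry in result.items():
    if isinstance(entry, dict):
        print(f"{dim}: {entry['score']:.2f}")
        for tip in entry.get("suggestions", []):
            print(f"  -> {tip}")
```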
---
## 🛠️ Project Structure
```
prival/
β”œβ”€β”€ config.yaml # Global config: dimensions, weights, thresholds
β”œβ”€β”€ core.py # Main logic: detector routing + aggregation
β”œβ”€β”€ detectors/ # Each validation dimension as standalone module
β”œβ”€β”€ scoring.py # Weighted score logic
β”œβ”€β”€ report.py # Output as Markdown / HTML
β”œβ”€β”€ utils/ # NLP helpers (syntax, keywords, embeddings)
└── tests/ # Unit tests + example prompts
```
---
## 🧩 Config Example
```yaml
enabled_dimensions:
- clarity
- ambiguity
- step_guidance
- injection_risk
# ...
weights:
clarity: 0.15
ambiguity: 0.10
step_guidance: 0.10
injection_risk: 0.15
# ...
thresholds:
clarity: 0.6
injection_risk: 0.5
```
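
The weights hint at how `total_score` is aggregated. A minimal sketch, assuming a simple weighted average normalized over the enabled dimensions (prival's actual `scoring.py` may normalize differently):

```python
# Illustrative weights and scores taken from the config and output examples.
weights = {"clarity": 0.15, "ambiguity": 0.10,
           "step_guidance": 0.10, "injection_risk": 0.15}
scores = {"clarity": 0.9, "ambiguity": 0.8,
          "step_guidance": 0.3, "injection_risk": 1.0}

# Normalize by the enabled weight mass so the total stays in [0.0, 1.0].
total = sum(weights[d] * scores[d] for d in scores) / sum(weights.values())
print(round(total, 2))  # 0.79
```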
## 📈 Output + Reporting
Each result contains:
- `score`: a float value from 0.0 to 1.0
- `suggestions`: concrete suggestions for improvement
Example output:
```json
{
"clarity": { "score": 0.9, "suggestions": [] },
"step_guidance": { "score": 0.3, "suggestions": ["Add step-by-step hints."] },
"total_score": 0.72
}
```
To export a visual HTML report:
```python
from prival.report import generate_html
generate_html(result, "report.html")
```
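
The features list also mentions JSON output; even without a dedicated helper, the result dict serializes directly with Python's standard library:

```python
import json

# `result` is the dict returned by evaluate_prompt()
with open("report.json", "w", encoding="utf-8") as f:
    json.dump(result, f, indent=2, ensure_ascii=False)
```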
---
## 🤖 CLI (coming soon)
```bash
prival-cli evaluate "Your prompt text here"
```
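
Until the CLI ships, a tiny script gives the same workflow (`evaluate.py` is just an illustrative name; it only uses the documented `evaluate_prompt` API):

```python
# evaluate.py: a minimal stand-in for the future CLI.
import sys
from prival import evaluate_prompt

if __name__ == "__main__":
    result = evaluate_prompt(" ".join(sys.argv[1:]))
    print("total_score:", result["total_score"])
```

Run it as `python evaluate.py "Your prompt text here"`.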
---
## 📚 License
MIT License.
Feel free to fork, extend, or integrate into your own LLM projects.
---
## 💌 Feedback
Issues, suggestions, or PRs are warmly welcome:
https://github.com/EugeneXiang
or ping me on Hugging Face.
---
Happy prompting! 🎉
Let your prompts shine ✨