# Quick Start Guide
## Installation & Launch (3 steps)
1. **Install dependencies:**
```bash
pip install -r requirements.txt
```
2. **Launch the app:**
```bash
python launch.py
```
3. **Open your browser** to http://localhost:7860
## Alternative Launch Methods
If the above doesn't work, try these:
```bash
# Method 1: Full startup script
python run.py
# Method 2: Direct app launch
python app.py
# Method 3: With dependency installation
python run.py --install
```
## First Time Usage
1. **Enter text** in the input box (try: "The quick brown fox jumps over the lazy dog.")
2. **Select a model** (default: gpt2)
3. **Choose model type** (decoder for GPT-like, encoder for BERT-like)
4. **Click "Analyze"**
You'll see:
- 🟢 Green tokens = low perplexity (model is confident)
- 🔴 Red tokens = high perplexity (model is uncertain)
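The color mapping can be sketched as a linear interpolation from green (confident) to red (uncertain). This is a hypothetical helper, not the app's actual implementation, and the `low`/`high` perplexity thresholds are assumptions:

```python
def perplexity_to_color(ppl, low=1.0, high=100.0):
    """Map a per-token perplexity to an RGB hex color.

    Values at or below `low` map to pure green (confident);
    values at or above `high` map to pure red (uncertain).
    """
    # Clamp the perplexity into [low, high], then normalize to [0, 1]
    t = (min(max(ppl, low), high) - low) / (high - low)
    red = int(255 * t)
    green = int(255 * (1 - t))
    return f"#{red:02x}{green:02x}00"

print(perplexity_to_color(1.0))    # confident token -> "#00ff00" (green)
print(perplexity_to_color(100.0))  # uncertain token -> "#ff0000" (red)
```

A token midway between the thresholds comes out olive (`#7f7f00`), which is why poorly rendered pages can look muddy; the "black/white tokens" note in the updates section below is about CSS not loading, not this math.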
## Troubleshooting
**Common Issues:**
- **"Module not found"** → Run: `pip install -r requirements.txt`
- **"Model download failed"** → Check internet connection
- **"Launch failed"** → Try: `python launch.py` or `python app.py`
- **Out of memory** → Use smaller models like `distilgpt2` or `distilbert-base-uncased`
**GPU Support:**
- Automatically uses GPU if available
- Falls back to CPU if no GPU found
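The GPU fallback can be sketched as follows, assuming the app uses PyTorch (which `requirements.txt` installs); the guarded import is only for illustration:

```python
# Pick the compute device: prefer a CUDA GPU, fall back to CPU.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    # torch not installed yet -- run `pip install -r requirements.txt`
    device = "cpu"

print(f"Using device: {device}")
# A loaded model would then be moved with: model.to(device)
```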
## Example Models to Try
**Decoder (GPT-like):**
- `gpt2` - Standard GPT-2
- `distilgpt2` - Smaller, faster
- `microsoft/DialoGPT-small` - Conversational
**Encoder (BERT-like):**
- `bert-base-uncased` - Standard BERT
- `distilbert-base-uncased` - Smaller, faster
- `roberta-base` - Improved BERT
## Need Help?
Run the test suite:
```bash
python test_app.py
```
Or try the command-line demo:
```bash
python demo.py
```
**Still having issues?** Check the full README.md for detailed instructions.
## Recent Updates
**Ultra-Simplified Interface!**
- Removed MLM probability slider for cleaner interface
- Removed iterations slider - single comprehensive analysis per run
- Encoder models now analyze all tokens for complete results
- Decoder models provide single-pass perplexity calculation
- Tokens are properly colored by perplexity (green=confident, red=uncertain)
- If you see black/white tokens, try refreshing the browser
- Test the colors with: `python simple_color_test.py` (creates color_test.html)
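For decoder models, the single-pass perplexity mentioned above is the exponential of the average negative log-probability the model assigns to each token. A minimal sketch of the arithmetic, using made-up token probabilities rather than real model output:

```python
import math

# Hypothetical per-token probabilities from a causal language model
token_probs = [0.9, 0.5, 0.05, 0.7]

# Per-token perplexity: exp of the negative log-probability (equals 1/p)
per_token_ppl = [math.exp(-math.log(p)) for p in token_probs]

# Sequence perplexity: exp of the mean negative log-probability
seq_ppl = math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

print(per_token_ppl)  # low values = confident (green), high = uncertain (red)
print(round(seq_ppl, 3))
```

The rare token (`0.05`) dominates: its per-token perplexity is 20, which is what the red highlighting flags.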