Spaces:
Sleeping
Sleeping
Upload 5 files
Browse files- DYNAMIC_CACHE_FIX_SUMMARY.md +133 -0
- TROUBLESHOOTING_DYNAMIC_CACHE.md +408 -0
- fix_local_model.py +203 -0
- llm.py +32 -8
- requirements.txt +7 -2
DYNAMIC_CACHE_FIX_SUMMARY.md
ADDED
|
@@ -0,0 +1,133 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# DynamicCache Error Fix - Quick Summary
|
| 2 |
+
|
| 3 |
+
## Problem
|
| 4 |
+
```
|
| 5 |
+
ERROR: Local model error: 'DynamicCache' object has no attribute 'seen_tokens'
|
| 6 |
+
```
|
| 7 |
+
|
| 8 |
+
**Result**: Quality Score 0.00 for all transcripts, no analysis extracted.
|
| 9 |
+
|
| 10 |
+
---
|
| 11 |
+
|
| 12 |
+
## Root Cause
|
| 13 |
+
Version incompatibility in transformers library's caching mechanism during model generation.
|
| 14 |
+
|
| 15 |
+
---
|
| 16 |
+
|
| 17 |
+
## ✅ Fixes Applied
|
| 18 |
+
|
| 19 |
+
### 1. Code Fix (llm.py)
|
| 20 |
+
Added `use_cache=False` parameter to disable problematic caching:
|
| 21 |
+
|
| 22 |
+
```python
|
| 23 |
+
outputs = query_llm_local.model.generate(
|
| 24 |
+
**inputs,
|
| 25 |
+
max_new_tokens=max_tokens,
|
| 26 |
+
temperature=temperature,
|
| 27 |
+
do_sample=temperature > 0,
|
| 28 |
+
pad_token_id=query_llm_local.tokenizer.eos_token_id,
|
| 29 |
+
use_cache=False # ← Fixes DynamicCache error
|
| 30 |
+
)
|
| 31 |
+
```
|
| 32 |
+
|
| 33 |
+
**Trade-off**: ~10-20% slower generation, but error-free.
|
| 34 |
+
|
| 35 |
+
### 2. Enhanced Error Handling
|
| 36 |
+
- Better error messages with specific guidance
|
| 37 |
+
- Automatic detection of DynamicCache issues
|
| 38 |
+
- Recommendations for next steps
|
| 39 |
+
|
| 40 |
+
### 3. Diagnostic Tool
|
| 41 |
+
Created `fix_local_model.py` to diagnose and resolve issues automatically.
|
| 42 |
+
|
| 43 |
+
---
|
| 44 |
+
|
| 45 |
+
## 🚀 Recommended Actions (Pick One)
|
| 46 |
+
|
| 47 |
+
### Option A: Upgrade Transformers (Quick Fix)
|
| 48 |
+
```bash
|
| 49 |
+
pip install --upgrade transformers
|
| 50 |
+
python -c "import transformers; print(transformers.__version__)"
|
| 51 |
+
```
|
| 52 |
+
**Expected**: Version 4.36.0 or higher
|
| 53 |
+
|
| 54 |
+
### Option B: Use HuggingFace API (Easiest)
|
| 55 |
+
```bash
|
| 56 |
+
# Get token from: https://huggingface.co/settings/tokens
|
| 57 |
+
export HUGGINGFACE_TOKEN='hf_your_token_here'
|
| 58 |
+
export USE_HF_API=True
|
| 59 |
+
```
|
| 60 |
+
|
| 61 |
+
### Option C: Use LMStudio (Best for Offline)
|
| 62 |
+
1. Download: https://lmstudio.ai/
|
| 63 |
+
2. Install and start server
|
| 64 |
+
3. Set environment:
|
| 65 |
+
```bash
|
| 66 |
+
export USE_LMSTUDIO=True
|
| 67 |
+
export LMSTUDIO_URL=http://localhost:1234
|
| 68 |
+
```
|
| 69 |
+
|
| 70 |
+
### Option D: Run Diagnostic
|
| 71 |
+
```bash
|
| 72 |
+
python fix_local_model.py
|
| 73 |
+
```
|
| 74 |
+
Automatically detects and guides you through fixes.
|
| 75 |
+
|
| 76 |
+
---
|
| 77 |
+
|
| 78 |
+
## Verification
|
| 79 |
+
|
| 80 |
+
After applying any fix, test:
|
| 81 |
+
```bash
|
| 82 |
+
python -c "from llm import query_llm_local; print(query_llm_local('Test', max_tokens=10))"
|
| 83 |
+
```
|
| 84 |
+
|
| 85 |
+
**Success**: Returns text (not error message)
|
| 86 |
+
**Still failing**: Try Option B or C above
|
| 87 |
+
|
| 88 |
+
---
|
| 89 |
+
|
| 90 |
+
## Files Modified/Created
|
| 91 |
+
|
| 92 |
+
✅ **Modified**:
|
| 93 |
+
- `llm.py` - Added use_cache=False and better error handling
|
| 94 |
+
- `requirements.txt` - Added version compatibility notes
|
| 95 |
+
|
| 96 |
+
✅ **Created**:
|
| 97 |
+
- `fix_local_model.py` - Diagnostic and fix script
|
| 98 |
+
- `TROUBLESHOOTING_DYNAMIC_CACHE.md` - Comprehensive guide (13KB)
|
| 99 |
+
- `DYNAMIC_CACHE_FIX_SUMMARY.md` - This quick reference
|
| 100 |
+
|
| 101 |
+
---
|
| 102 |
+
|
| 103 |
+
## Next Steps
|
| 104 |
+
|
| 105 |
+
1. **Choose a solution** (A, B, C, or D above)
|
| 106 |
+
2. **Apply the fix**
|
| 107 |
+
3. **Restart your application**
|
| 108 |
+
4. **Process a test transcript**
|
| 109 |
+
5. **Verify Quality Score > 0.00**
|
| 110 |
+
|
| 111 |
+
If issues persist, see `TROUBLESHOOTING_DYNAMIC_CACHE.md` for detailed guidance.
|
| 112 |
+
|
| 113 |
+
---
|
| 114 |
+
|
| 115 |
+
## Quick Reference
|
| 116 |
+
|
| 117 |
+
| Issue | Fix |
|
| 118 |
+
|-------|-----|
|
| 119 |
+
| Quality Score 0.00 | LLM is failing - apply fixes above |
|
| 120 |
+
| DynamicCache error | use_cache=False (already applied) + upgrade transformers |
|
| 121 |
+
| Slow processing | Use HF API (Option B) for speed |
|
| 122 |
+
| Offline required | Use LMStudio (Option C) |
|
| 123 |
+
| Not sure what to do | Run diagnostic (Option D) |
|
| 124 |
+
|
| 125 |
+
---
|
| 126 |
+
|
| 127 |
+
## Support
|
| 128 |
+
|
| 129 |
+
- **Full troubleshooting**: See `TROUBLESHOOTING_DYNAMIC_CACHE.md`
|
| 130 |
+
- **Run diagnostic**: `python fix_local_model.py`
|
| 131 |
+
- **Check enhancements**: See `ENHANCEMENTS.md`
|
| 132 |
+
|
| 133 |
+
✅ **The code fix is already applied - you just need to upgrade dependencies or switch backends!**
|
TROUBLESHOOTING_DYNAMIC_CACHE.md
ADDED
|
@@ -0,0 +1,408 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Troubleshooting: DynamicCache 'seen_tokens' Error
|
| 2 |
+
|
| 3 |
+
## Error Message
|
| 4 |
+
```
|
| 5 |
+
ERROR: Local model error: 'DynamicCache' object has no attribute 'seen_tokens'
|
| 6 |
+
```
|
| 7 |
+
|
| 8 |
+
## What This Means
|
| 9 |
+
|
| 10 |
+
This error occurs when using local model inference (Phi-3, Llama, Mistral, etc.) with the `transformers` library. It's caused by a version incompatibility in the internal caching mechanism used during text generation.
|
| 11 |
+
|
| 12 |
+
**Impact**:
|
| 13 |
+
- Transcripts process but get Quality Score 0.00
|
| 14 |
+
- LLM analysis fails for all chunks
|
| 15 |
+
- No insights extracted from transcripts
|
| 16 |
+
- System still generates outputs but they're empty/error messages
|
| 17 |
+
|
| 18 |
+
---
|
| 19 |
+
|
| 20 |
+
## Root Cause
|
| 21 |
+
|
| 22 |
+
The `transformers` library changed its internal `Cache` implementation between versions:
|
| 23 |
+
- **Older versions (< 4.36)**: Used simpler cache without `seen_tokens` attribute
|
| 24 |
+
- **Newer versions (>= 4.36)**: Introduced `DynamicCache` with `seen_tokens` attribute
|
| 25 |
+
- **Version mismatch**: Code expects one format but library provides another
|
| 26 |
+
|
| 27 |
+
The error specifically occurs during the `model.generate()` call when the library tries to manage the key-value cache for efficient generation.
|
| 28 |
+
|
| 29 |
+
---
|
| 30 |
+
|
| 31 |
+
## Quick Fix (Applied)
|
| 32 |
+
|
| 33 |
+
**File**: `llm.py` (lines 460-480)
|
| 34 |
+
|
| 35 |
+
The code has been updated with:
|
| 36 |
+
|
| 37 |
+
```python
|
| 38 |
+
# Fix for DynamicCache 'seen_tokens' error
|
| 39 |
+
outputs = query_llm_local.model.generate(
|
| 40 |
+
**inputs,
|
| 41 |
+
max_new_tokens=max_tokens,
|
| 42 |
+
temperature=temperature,
|
| 43 |
+
do_sample=temperature > 0,
|
| 44 |
+
pad_token_id=query_llm_local.tokenizer.eos_token_id,
|
| 45 |
+
use_cache=False # ← Disable caching to avoid DynamicCache errors
|
| 46 |
+
)
|
| 47 |
+
```
|
| 48 |
+
|
| 49 |
+
**What this does**: Disables the key-value caching mechanism entirely, forcing the model to recompute at each step.
|
| 50 |
+
|
| 51 |
+
**Trade-off**: Slightly slower generation (~10-20%) but avoids the error completely.
|
| 52 |
+
|
| 53 |
+
---
|
| 54 |
+
|
| 55 |
+
## Solutions (In Order of Preference)
|
| 56 |
+
|
| 57 |
+
### Solution 1: Upgrade Transformers Library ✅ **RECOMMENDED**
|
| 58 |
+
|
| 59 |
+
```bash
|
| 60 |
+
pip install --upgrade transformers
|
| 61 |
+
```
|
| 62 |
+
|
| 63 |
+
**Expected version**: 4.36.0 or higher
|
| 64 |
+
|
| 65 |
+
**Verify installation**:
|
| 66 |
+
```bash
|
| 67 |
+
python -c "import transformers; print(transformers.__version__)"
|
| 68 |
+
```
|
| 69 |
+
|
| 70 |
+
**Expected output**: `4.36.0` or higher
|
| 71 |
+
|
| 72 |
+
**Why this works**: Newer versions have the `seen_tokens` attribute properly implemented.
|
| 73 |
+
|
| 74 |
+
---
|
| 75 |
+
|
| 76 |
+
### Solution 2: Use HuggingFace API Instead 🚀 **EASIEST**
|
| 77 |
+
|
| 78 |
+
Instead of running models locally, use HuggingFace's cloud API.
|
| 79 |
+
|
| 80 |
+
**Advantages**:
|
| 81 |
+
- No local model loading (saves RAM)
|
| 82 |
+
- Faster processing
|
| 83 |
+
- No compatibility issues
|
| 84 |
+
- Access to larger, better models
|
| 85 |
+
|
| 86 |
+
**Setup**:
|
| 87 |
+
|
| 88 |
+
1. Get a HuggingFace token: https://huggingface.co/settings/tokens
|
| 89 |
+
2. Create token with "Read" access
|
| 90 |
+
3. Set environment variables:
|
| 91 |
+
|
| 92 |
+
```bash
|
| 93 |
+
export HUGGINGFACE_TOKEN='hf_your_token_here'
|
| 94 |
+
export USE_HF_API=True
|
| 95 |
+
```
|
| 96 |
+
|
| 97 |
+
Or in `.env` file:
|
| 98 |
+
```
|
| 99 |
+
HUGGINGFACE_TOKEN=hf_your_token_here
|
| 100 |
+
USE_HF_API=True
|
| 101 |
+
```
|
| 102 |
+
|
| 103 |
+
**Verify**:
|
| 104 |
+
```bash
|
| 105 |
+
python -c "import os; print('HF Token:', os.getenv('HUGGINGFACE_TOKEN')[:20])"
|
| 106 |
+
```
|
| 107 |
+
|
| 108 |
+
---
|
| 109 |
+
|
| 110 |
+
### Solution 3: Use LMStudio 🖥️ **BEST FOR OFFLINE**
|
| 111 |
+
|
| 112 |
+
LMStudio provides a GUI for running local models with better compatibility.
|
| 113 |
+
|
| 114 |
+
**Advantages**:
|
| 115 |
+
- Better compatibility than raw transformers
|
| 116 |
+
- Easy model management with GUI
|
| 117 |
+
- Local/offline processing
|
| 118 |
+
- No API costs
|
| 119 |
+
|
| 120 |
+
**Setup**:
|
| 121 |
+
|
| 122 |
+
1. Download LMStudio: https://lmstudio.ai/
|
| 123 |
+
2. Install and open LMStudio
|
| 124 |
+
3. Download a model (recommended: Phi-3-mini or Mistral-7B)
|
| 125 |
+
4. Start the local server:
|
| 126 |
+
- Open LMStudio
|
| 127 |
+
- Go to "Server" tab
|
| 128 |
+
- Click "Start Server"
|
| 129 |
+
- Default: http://localhost:1234
|
| 130 |
+
|
| 131 |
+
5. Set environment variables:
|
| 132 |
+
|
| 133 |
+
```bash
|
| 134 |
+
export USE_LMSTUDIO=True
|
| 135 |
+
export LMSTUDIO_URL=http://localhost:1234
|
| 136 |
+
```
|
| 137 |
+
|
| 138 |
+
Or in `.env` file:
|
| 139 |
+
```
|
| 140 |
+
USE_LMSTUDIO=True
|
| 141 |
+
LMSTUDIO_URL=http://localhost:1234
|
| 142 |
+
```
|
| 143 |
+
|
| 144 |
+
**Verify**:
|
| 145 |
+
```bash
|
| 146 |
+
curl http://localhost:1234/v1/models
|
| 147 |
+
```
|
| 148 |
+
|
| 149 |
+
Should return JSON with available models.
|
| 150 |
+
|
| 151 |
+
---
|
| 152 |
+
|
| 153 |
+
### Solution 4: Use Diagnostic Script
|
| 154 |
+
|
| 155 |
+
Run the diagnostic script to automatically detect and fix issues:
|
| 156 |
+
|
| 157 |
+
```bash
|
| 158 |
+
python fix_local_model.py
|
| 159 |
+
```
|
| 160 |
+
|
| 161 |
+
This script will:
|
| 162 |
+
1. Check your transformers version
|
| 163 |
+
2. Test local model functionality
|
| 164 |
+
3. Provide specific recommendations
|
| 165 |
+
4. Guide you through setup alternatives
|
| 166 |
+
|
| 167 |
+
**Example output**:
|
| 168 |
+
```
|
| 169 |
+
==================================================================
|
| 170 |
+
Local Model DynamicCache Error Fix
|
| 171 |
+
==================================================================
|
| 172 |
+
|
| 173 |
+
[Step 1] Diagnosing current environment...
|
| 174 |
+
✓ Transformers version: 4.35.0
|
| 175 |
+
⚠️ Transformers 4.35.0 is outdated
|
| 176 |
+
Recommended: >= 4.36.0
|
| 177 |
+
|
| 178 |
+
[Step 2] Attempting to fix...
|
| 179 |
+
Upgrade transformers library? (y/n): y
|
| 180 |
+
✓ Transformers upgraded successfully
|
| 181 |
+
✓ Please restart your application
|
| 182 |
+
```
|
| 183 |
+
|
| 184 |
+
---
|
| 185 |
+
|
| 186 |
+
## Verification Steps
|
| 187 |
+
|
| 188 |
+
After applying any fix, verify it works:
|
| 189 |
+
|
| 190 |
+
### Test 1: Check Versions
|
| 191 |
+
```bash
|
| 192 |
+
python -c "import transformers, torch; print(f'Transformers: {transformers.__version__}'); print(f'PyTorch: {torch.__version__}')"
|
| 193 |
+
```
|
| 194 |
+
|
| 195 |
+
**Expected**:
|
| 196 |
+
```
|
| 197 |
+
Transformers: 4.36.0 or higher
|
| 198 |
+
PyTorch: 2.1.0 or higher
|
| 199 |
+
```
|
| 200 |
+
|
| 201 |
+
### Test 2: Quick LLM Test
|
| 202 |
+
```bash
|
| 203 |
+
python -c "from llm import query_llm_local; print(query_llm_local('Test', max_tokens=10))"
|
| 204 |
+
```
|
| 205 |
+
|
| 206 |
+
**Expected**: Some text output (not an error message)
|
| 207 |
+
|
| 208 |
+
### Test 3: Full Integration Test
|
| 209 |
+
Process a single transcript through the app and check:
|
| 210 |
+
- Quality Score > 0.00 ✓
|
| 211 |
+
- Structured data extracted ✓
|
| 212 |
+
- No DynamicCache errors in logs ✓
|
| 213 |
+
|
| 214 |
+
---
|
| 215 |
+
|
| 216 |
+
## Understanding Quality Score 0.00
|
| 217 |
+
|
| 218 |
+
If you see `Quality Score: 0.00` for all transcripts, it means:
|
| 219 |
+
|
| 220 |
+
**Cause**: LLM analysis is failing (likely due to this error)
|
| 221 |
+
|
| 222 |
+
**How Quality Score is calculated** (validation.py):
|
| 223 |
+
```python
|
| 224 |
+
def validate_transcript_quality(full_text, structured_data, interviewee_type):
|
| 225 |
+
score = 0.0
|
| 226 |
+
|
| 227 |
+
# Text length check (0.3 points)
|
| 228 |
+
if len(full_text) > 100: score += 0.3
|
| 229 |
+
|
| 230 |
+
# Structured data check (0.4 points)
|
| 231 |
+
if has_structured_data: score += 0.4
|
| 232 |
+
|
| 233 |
+
# Specificity check (0.3 points)
|
| 234 |
+
if has_specific_terms: score += 0.3
|
| 235 |
+
|
| 236 |
+
return score, issues
|
| 237 |
+
```
|
| 238 |
+
|
| 239 |
+
**If LLM fails**:
|
| 240 |
+
- `full_text` = "[Error] Local model failed: ..."
|
| 241 |
+
- `structured_data` = {} (empty)
|
| 242 |
+
- **Result**: Score = 0.00
|
| 243 |
+
|
| 244 |
+
**Fix**: Resolve the DynamicCache error → LLM works → Quality Score improves to 0.7-1.0
|
| 245 |
+
|
| 246 |
+
---
|
| 247 |
+
|
| 248 |
+
## Prevention & Best Practices
|
| 249 |
+
|
| 250 |
+
### 1. Pin Dependency Versions
|
| 251 |
+
In `requirements.txt`:
|
| 252 |
+
```
|
| 253 |
+
transformers>=4.36.0,<5.0.0
|
| 254 |
+
torch>=2.1.0,<2.3.0
|
| 255 |
+
```
|
| 256 |
+
|
| 257 |
+
**Why**: Ensures compatible versions are installed together
|
| 258 |
+
|
| 259 |
+
### 2. Use Virtual Environments
|
| 260 |
+
```bash
|
| 261 |
+
python -m venv venv
|
| 262 |
+
source venv/bin/activate # Linux/Mac
|
| 263 |
+
# or
|
| 264 |
+
venv\Scripts\activate # Windows
|
| 265 |
+
pip install -r requirements.txt
|
| 266 |
+
```
|
| 267 |
+
|
| 268 |
+
**Why**: Isolates dependencies, prevents conflicts with other projects
|
| 269 |
+
|
| 270 |
+
### 3. Regular Updates
|
| 271 |
+
```bash
|
| 272 |
+
pip install --upgrade transformers torch accelerate
|
| 273 |
+
```
|
| 274 |
+
|
| 275 |
+
**When**:
|
| 276 |
+
- After any error
|
| 277 |
+
- Monthly maintenance
|
| 278 |
+
- Before deploying to production
|
| 279 |
+
|
| 280 |
+
### 4. Prefer Cloud APIs for Production
|
| 281 |
+
|
| 282 |
+
For production deployments:
|
| 283 |
+
- **Use HuggingFace API** for reliability
|
| 284 |
+
- **Use LMStudio** for on-premise/offline requirements
|
| 285 |
+
- **Avoid local transformers** unless you control the environment
|
| 286 |
+
|
| 287 |
+
---
|
| 288 |
+
|
| 289 |
+
## Environment-Specific Notes
|
| 290 |
+
|
| 291 |
+
### Docker / HuggingFace Spaces
|
| 292 |
+
```dockerfile
|
| 293 |
+
# In Dockerfile or requirements
|
| 294 |
+
RUN pip install transformers>=4.36.0 torch>=2.1.0 accelerate
|
| 295 |
+
```
|
| 296 |
+
|
| 297 |
+
### Windows
|
| 298 |
+
```powershell
|
| 299 |
+
# Install in PowerShell with admin rights
|
| 300 |
+
pip install --upgrade transformers torch accelerate
|
| 301 |
+
```
|
| 302 |
+
|
| 303 |
+
### Linux / WSL
|
| 304 |
+
```bash
|
| 305 |
+
pip3 install --upgrade transformers torch accelerate
|
| 306 |
+
```
|
| 307 |
+
|
| 308 |
+
### macOS
|
| 309 |
+
```bash
|
| 310 |
+
pip3 install --upgrade transformers torch accelerate
|
| 311 |
+
```
|
| 312 |
+
|
| 313 |
+
---
|
| 314 |
+
|
| 315 |
+
## Still Having Issues?
|
| 316 |
+
|
| 317 |
+
### Debug Mode
|
| 318 |
+
Enable detailed logging:
|
| 319 |
+
```python
|
| 320 |
+
import os
|
| 321 |
+
os.environ["DEBUG_MODE"] = "True"
|
| 322 |
+
```
|
| 323 |
+
|
| 324 |
+
Then check logs for detailed error messages.
|
| 325 |
+
|
| 326 |
+
### Check Full Error Stack
|
| 327 |
+
Look for the full traceback in console output:
|
| 328 |
+
```
|
| 329 |
+
ERROR: Local model error: 'DynamicCache' object has no attribute 'seen_tokens'
|
| 330 |
+
Traceback (most recent call last):
|
| 331 |
+
File "llm.py", line 459, in query_llm_local
|
| 332 |
+
outputs = query_llm_local.model.generate(...)
|
| 333 |
+
...
|
| 334 |
+
```
|
| 335 |
+
|
| 336 |
+
### Contact Support
|
| 337 |
+
If the issue persists:
|
| 338 |
+
1. Run diagnostic script: `python fix_local_model.py`
|
| 339 |
+
2. Capture full logs
|
| 340 |
+
3. Note your environment:
|
| 341 |
+
- OS (Windows/Linux/Mac)
|
| 342 |
+
- Python version
|
| 343 |
+
- Transformers version
|
| 344 |
+
- PyTorch version
|
| 345 |
+
4. Report issue with logs
|
| 346 |
+
|
| 347 |
+
---
|
| 348 |
+
|
| 349 |
+
## Summary Checklist
|
| 350 |
+
|
| 351 |
+
- [ ] Updated transformers: `pip install --upgrade transformers`
|
| 352 |
+
- [ ] Verified version: `python -c "import transformers; print(transformers.__version__)"`
|
| 353 |
+
- [ ] Applied code fix (use_cache=False) - already done in llm.py
|
| 354 |
+
- [ ] Tested with sample transcript
|
| 355 |
+
- [ ] Quality Score > 0.00 ✓
|
| 356 |
+
- [ ] OR: Switched to HF API / LMStudio instead
|
| 357 |
+
|
| 358 |
+
**If all checked**: ✓ Problem solved!
|
| 359 |
+
|
| 360 |
+
**If still failing**: Use HF API or LMStudio (Solutions 2-3 above)
|
| 361 |
+
|
| 362 |
+
---
|
| 363 |
+
|
| 364 |
+
## Related Files
|
| 365 |
+
|
| 366 |
+
- `llm.py` - Contains the fix (lines 460-480)
|
| 367 |
+
- `fix_local_model.py` - Diagnostic script
|
| 368 |
+
- `requirements.txt` - Dependency versions
|
| 369 |
+
- `ENHANCEMENTS.md` - Recent improvements documentation
|
| 370 |
+
|
| 371 |
+
---
|
| 372 |
+
|
| 373 |
+
## Technical Details (For Developers)
|
| 374 |
+
|
| 375 |
+
### Why `use_cache=False` Works
|
| 376 |
+
|
| 377 |
+
**Normal generation with caching**:
|
| 378 |
+
```python
|
| 379 |
+
# Step 1: Generate token 1
|
| 380 |
+
cache = DynamicCache() # Create cache
|
| 381 |
+
cache.seen_tokens = 1 # Track position
|
| 382 |
+
|
| 383 |
+
# Step 2: Generate token 2
|
| 384 |
+
cache.seen_tokens = 2 # Update position
|
| 385 |
+
# ... uses previous key/values from cache
|
| 386 |
+
|
| 387 |
+
# Faster but requires cache.seen_tokens attribute
|
| 388 |
+
```
|
| 389 |
+
|
| 390 |
+
**Generation without caching**:
|
| 391 |
+
```python
|
| 392 |
+
# Step 1: Generate token 1
|
| 393 |
+
# No cache used
|
| 394 |
+
|
| 395 |
+
# Step 2: Generate token 2
|
| 396 |
+
# Recompute everything from scratch
|
| 397 |
+
|
| 398 |
+
# Slower (~10-20%) but no cache dependencies
|
| 399 |
+
```
|
| 400 |
+
|
| 401 |
+
### Future Improvements
|
| 402 |
+
|
| 403 |
+
We're monitoring:
|
| 404 |
+
- Transformers library updates
|
| 405 |
+
- Alternative caching implementations
|
| 406 |
+
- Model-specific optimizations
|
| 407 |
+
|
| 408 |
+
Stay updated: Check `ENHANCEMENTS.md` for latest improvements.
|
fix_local_model.py
ADDED
|
@@ -0,0 +1,203 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
#!/usr/bin/env python3
|
| 2 |
+
"""
|
| 3 |
+
Fix Local Model DynamicCache Error
|
| 4 |
+
===================================
|
| 5 |
+
|
| 6 |
+
This script diagnoses and fixes the 'DynamicCache' object has no attribute 'seen_tokens' error.
|
| 7 |
+
|
| 8 |
+
Root Cause:
|
| 9 |
+
-----------
|
| 10 |
+
The error occurs due to version incompatibility between transformers library versions.
|
| 11 |
+
Newer versions (>= 4.36) changed the internal cache mechanism.
|
| 12 |
+
|
| 13 |
+
Solutions (in order of preference):
|
| 14 |
+
------------------------------------
|
| 15 |
+
1. Upgrade transformers to latest stable version
|
| 16 |
+
2. Use HuggingFace API instead of local model
|
| 17 |
+
3. Use LMStudio for local inference
|
| 18 |
+
4. Disable caching in generation (already implemented in llm.py)
|
| 19 |
+
"""
|
| 20 |
+
|
| 21 |
+
import subprocess
|
| 22 |
+
import sys
|
| 23 |
+
import os
|
| 24 |
+
|
| 25 |
+
def check_transformers_version():
|
| 26 |
+
"""Check installed transformers version"""
|
| 27 |
+
try:
|
| 28 |
+
import transformers
|
| 29 |
+
version = transformers.__version__
|
| 30 |
+
print(f"✓ Transformers version: {version}")
|
| 31 |
+
|
| 32 |
+
# Parse version
|
| 33 |
+
major, minor, patch = map(int, version.split('.')[:3])
|
| 34 |
+
|
| 35 |
+
if major < 4 or (major == 4 and minor < 36):
|
| 36 |
+
print(f"⚠️ Transformers {version} is outdated")
|
| 37 |
+
print(f" Recommended: >= 4.36.0")
|
| 38 |
+
return False
|
| 39 |
+
elif major == 4 and minor >= 36 and minor < 40:
|
| 40 |
+
print(f"✓ Transformers {version} should work")
|
| 41 |
+
return True
|
| 42 |
+
else:
|
| 43 |
+
print(f"✓ Transformers {version} is recent")
|
| 44 |
+
return True
|
| 45 |
+
|
| 46 |
+
except ImportError:
|
| 47 |
+
print("✗ Transformers not installed")
|
| 48 |
+
return False
|
| 49 |
+
except Exception as e:
|
| 50 |
+
print(f"✗ Error checking transformers: {e}")
|
| 51 |
+
return False
|
| 52 |
+
|
| 53 |
+
def check_torch_version():
|
| 54 |
+
"""Check PyTorch version"""
|
| 55 |
+
try:
|
| 56 |
+
import torch
|
| 57 |
+
version = torch.__version__
|
| 58 |
+
print(f"✓ PyTorch version: {version}")
|
| 59 |
+
print(f" CUDA available: {torch.cuda.is_available()}")
|
| 60 |
+
return True
|
| 61 |
+
except ImportError:
|
| 62 |
+
print("✗ PyTorch not installed")
|
| 63 |
+
return False
|
| 64 |
+
|
| 65 |
+
def upgrade_transformers():
|
| 66 |
+
"""Upgrade transformers to latest version"""
|
| 67 |
+
print("\n[1] Upgrading transformers library...")
|
| 68 |
+
try:
|
| 69 |
+
subprocess.check_call([
|
| 70 |
+
sys.executable, "-m", "pip", "install",
|
| 71 |
+
"--upgrade", "transformers"
|
| 72 |
+
])
|
| 73 |
+
print("✓ Transformers upgraded successfully")
|
| 74 |
+
return True
|
| 75 |
+
except subprocess.CalledProcessError as e:
|
| 76 |
+
print(f"✗ Failed to upgrade transformers: {e}")
|
| 77 |
+
return False
|
| 78 |
+
|
| 79 |
+
def setup_hf_api():
|
| 80 |
+
"""Guide user to setup HuggingFace API"""
|
| 81 |
+
print("\n[2] Setup HuggingFace API (Alternative to local model)")
|
| 82 |
+
print("-" * 60)
|
| 83 |
+
print("1. Get HF token: https://huggingface.co/settings/tokens")
|
| 84 |
+
print("2. Create a token with 'Read' access")
|
| 85 |
+
print("3. Set environment variable:")
|
| 86 |
+
print(" export HUGGINGFACE_TOKEN='your_token_here'")
|
| 87 |
+
print(" export USE_HF_API=True")
|
| 88 |
+
print("")
|
| 89 |
+
print("Or add to .env file:")
|
| 90 |
+
print(" HUGGINGFACE_TOKEN=your_token_here")
|
| 91 |
+
print(" USE_HF_API=True")
|
| 92 |
+
|
| 93 |
+
def setup_lmstudio():
|
| 94 |
+
"""Guide user to setup LMStudio"""
|
| 95 |
+
print("\n[3] Setup LMStudio (Alternative local inference)")
|
| 96 |
+
print("-" * 60)
|
| 97 |
+
print("1. Download LMStudio: https://lmstudio.ai/")
|
| 98 |
+
print("2. Download a model (recommended: Phi-3 or Mistral)")
|
| 99 |
+
print("3. Start the local server (in LMStudio)")
|
| 100 |
+
print("4. Set environment variable:")
|
| 101 |
+
print(" export USE_LMSTUDIO=True")
|
| 102 |
+
print(" export LMSTUDIO_URL=http://localhost:1234")
|
| 103 |
+
print("")
|
| 104 |
+
print("Or add to .env file:")
|
| 105 |
+
print(" USE_LMSTUDIO=True")
|
| 106 |
+
print(" LMSTUDIO_URL=http://localhost:1234")
|
| 107 |
+
|
| 108 |
+
def test_local_model():
|
| 109 |
+
"""Test local model with the fix"""
|
| 110 |
+
print("\n[4] Testing local model with DynamicCache fix...")
|
| 111 |
+
print("-" * 60)
|
| 112 |
+
|
| 113 |
+
try:
|
| 114 |
+
# Import after any upgrades
|
| 115 |
+
from llm import query_llm_local
|
| 116 |
+
|
| 117 |
+
# Simple test
|
| 118 |
+
test_prompt = "Hello, this is a test. Please respond with 'OK'."
|
| 119 |
+
result = query_llm_local(test_prompt, max_tokens=50)
|
| 120 |
+
|
| 121 |
+
if "[Error]" not in result:
|
| 122 |
+
print(f"✓ Local model working!")
|
| 123 |
+
print(f" Response: {result[:100]}")
|
| 124 |
+
return True
|
| 125 |
+
else:
|
| 126 |
+
print(f"✗ Local model still failing:")
|
| 127 |
+
print(f" {result}")
|
| 128 |
+
return False
|
| 129 |
+
|
| 130 |
+
except Exception as e:
|
| 131 |
+
print(f"✗ Test failed: {e}")
|
| 132 |
+
return False
|
| 133 |
+
|
| 134 |
+
def clear_model_cache():
|
| 135 |
+
"""Clear cached model to force reload"""
|
| 136 |
+
print("\n[5] Clearing model cache...")
|
| 137 |
+
try:
|
| 138 |
+
from llm import query_llm_local
|
| 139 |
+
if hasattr(query_llm_local, 'model'):
|
| 140 |
+
delattr(query_llm_local, 'model')
|
| 141 |
+
if hasattr(query_llm_local, 'tokenizer'):
|
| 142 |
+
delattr(query_llm_local, 'tokenizer')
|
| 143 |
+
print("✓ Model cache cleared")
|
| 144 |
+
return True
|
| 145 |
+
except Exception as e:
|
| 146 |
+
print(f"✗ Failed to clear cache: {e}")
|
| 147 |
+
return False
|
| 148 |
+
|
| 149 |
+
def main():
|
| 150 |
+
print("="*70)
|
| 151 |
+
print("Local Model DynamicCache Error Fix")
|
| 152 |
+
print("="*70)
|
| 153 |
+
|
| 154 |
+
print("\n[Step 1] Diagnosing current environment...")
|
| 155 |
+
print("-" * 60)
|
| 156 |
+
|
| 157 |
+
transformers_ok = check_transformers_version()
|
| 158 |
+
torch_ok = check_torch_version()
|
| 159 |
+
|
| 160 |
+
if not transformers_ok:
|
| 161 |
+
print("\n[Step 2] Attempting to fix...")
|
| 162 |
+
response = input("\nUpgrade transformers library? (y/n): ")
|
| 163 |
+
if response.lower() == 'y':
|
| 164 |
+
if upgrade_transformers():
|
| 165 |
+
print("\n✓ Please restart your application to use the upgraded version")
|
| 166 |
+
return
|
| 167 |
+
|
| 168 |
+
print("\n[Step 3] Testing current setup...")
|
| 169 |
+
clear_model_cache()
|
| 170 |
+
if test_local_model():
|
| 171 |
+
print("\n" + "="*70)
|
| 172 |
+
print("✓ SUCCESS! Local model is working")
|
| 173 |
+
print("="*70)
|
| 174 |
+
return
|
| 175 |
+
|
| 176 |
+
print("\n[Step 4] Alternative Solutions")
|
| 177 |
+
print("="*70)
|
| 178 |
+
print("\nLocal model is not working. Consider these alternatives:\n")
|
| 179 |
+
|
| 180 |
+
setup_hf_api()
|
| 181 |
+
print()
|
| 182 |
+
setup_lmstudio()
|
| 183 |
+
|
| 184 |
+
print("\n" + "="*70)
|
| 185 |
+
print("Recommended Action:")
|
| 186 |
+
print("="*70)
|
| 187 |
+
print("1. Use HuggingFace API (easiest, cloud-based)")
|
| 188 |
+
print(" - Fast and reliable")
|
| 189 |
+
print(" - Requires API token (free)")
|
| 190 |
+
print(" - Set USE_HF_API=True")
|
| 191 |
+
print("")
|
| 192 |
+
print("2. Use LMStudio (best for offline/privacy)")
|
| 193 |
+
print(" - Run models locally with GUI")
|
| 194 |
+
print(" - Better compatibility than transformers")
|
| 195 |
+
print(" - Set USE_LMSTUDIO=True")
|
| 196 |
+
print("")
|
| 197 |
+
print("3. Upgrade transformers and try again")
|
| 198 |
+
print(" - pip install --upgrade transformers torch")
|
| 199 |
+
print(" - May require compatible PyTorch version")
|
| 200 |
+
print("="*70)
|
| 201 |
+
|
| 202 |
+
if __name__ == "__main__":
|
| 203 |
+
main()
|
llm.py
CHANGED
|
@@ -456,13 +456,28 @@ def query_llm_local(prompt: str, max_tokens: int = 1500) -> str:
|
|
| 456 |
|
| 457 |
# Generate with proper parameters
|
| 458 |
logger.info(f"Generating with local model (max_tokens={max_tokens}, temp={temperature})")
|
| 459 |
-
|
| 460 |
-
|
| 461 |
-
|
| 462 |
-
|
| 463 |
-
|
| 464 |
-
|
| 465 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 466 |
|
| 467 |
# Decode only the new tokens (not the prompt)
|
| 468 |
response = query_llm_local.tokenizer.decode(
|
|
@@ -478,7 +493,16 @@ def query_llm_local(prompt: str, max_tokens: int = 1500) -> str:
|
|
| 478 |
error_details = traceback.format_exc()
|
| 479 |
logger.error(f"Local model error: {e}")
|
| 480 |
logger.debug(error_details)
|
| 481 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 482 |
|
| 483 |
|
| 484 |
def query_llm(
|
|
|
|
| 456 |
|
| 457 |
# Generate with proper parameters
|
| 458 |
logger.info(f"Generating with local model (max_tokens={max_tokens}, temp={temperature})")
|
| 459 |
+
|
| 460 |
+
# Fix for DynamicCache 'seen_tokens' error in newer transformers versions
|
| 461 |
+
# Use cache_implementation parameter or disable cache to avoid compatibility issues
|
| 462 |
+
try:
|
| 463 |
+
outputs = query_llm_local.model.generate(
|
| 464 |
+
**inputs,
|
| 465 |
+
max_new_tokens=max_tokens,
|
| 466 |
+
temperature=temperature,
|
| 467 |
+
do_sample=temperature > 0,
|
| 468 |
+
pad_token_id=query_llm_local.tokenizer.eos_token_id,
|
| 469 |
+
use_cache=False # Disable caching to avoid DynamicCache errors
|
| 470 |
+
)
|
| 471 |
+
except (TypeError, AttributeError) as cache_error:
|
| 472 |
+
# Fallback: If cache parameter fails, try without cache parameter
|
| 473 |
+
logger.warning(f"Cache parameter issue, retrying without cache: {cache_error}")
|
| 474 |
+
outputs = query_llm_local.model.generate(
|
| 475 |
+
**inputs,
|
| 476 |
+
max_new_tokens=max_tokens,
|
| 477 |
+
temperature=temperature,
|
| 478 |
+
do_sample=temperature > 0,
|
| 479 |
+
pad_token_id=query_llm_local.tokenizer.eos_token_id
|
| 480 |
+
)
|
| 481 |
|
| 482 |
# Decode only the new tokens (not the prompt)
|
| 483 |
response = query_llm_local.tokenizer.decode(
|
|
|
|
| 493 |
error_details = traceback.format_exc()
|
| 494 |
logger.error(f"Local model error: {e}")
|
| 495 |
logger.debug(error_details)
|
| 496 |
+
|
| 497 |
+
# Check if this is a DynamicCache error - provide specific guidance
|
| 498 |
+
if "DynamicCache" in str(e) or "seen_tokens" in str(e):
|
| 499 |
+
logger.error("DynamicCache compatibility issue detected")
|
| 500 |
+
logger.error("Solution: Update transformers library or use HF API/LMStudio instead")
|
| 501 |
+
logger.error(" pip install --upgrade transformers")
|
| 502 |
+
logger.error(" OR set USE_HF_API=True or USE_LMSTUDIO=True in environment")
|
| 503 |
+
|
| 504 |
+
# Return a structured error that won't break the pipeline
|
| 505 |
+
return f"[Error] Local model failed: {str(e)[:100]}. Try using HF API or LMStudio instead."
|
| 506 |
|
| 507 |
|
| 508 |
def query_llm(
|
requirements.txt
CHANGED
|
@@ -43,8 +43,13 @@ python-dotenv>=1.0.0 # .env file loading (optional - we have manual loader)
|
|
| 43 |
# ============================================================================
|
| 44 |
# LOCAL MODEL INFERENCE (For HuggingFace Spaces deployment)
|
| 45 |
# ============================================================================
|
| 46 |
-
|
| 47 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 48 |
accelerate>=0.25.0 # For device_map="auto" and efficient loading
|
| 49 |
sentencepiece>=0.1.99 # Tokenizer support for some models
|
| 50 |
protobuf>=3.20.0 # Required by some tokenizers
|
|
|
|
| 43 |
# ============================================================================
|
| 44 |
# LOCAL MODEL INFERENCE (For HuggingFace Spaces deployment)
|
| 45 |
# ============================================================================
|
| 46 |
+
# NOTE: For DynamicCache compatibility, use transformers >= 4.36.0
|
| 47 |
+
# If you get "'DynamicCache' object has no attribute 'seen_tokens'" error:
|
| 48 |
+
# 1. Run: pip install --upgrade transformers
|
| 49 |
+
# 2. Or use HF API: set USE_HF_API=True
|
| 50 |
+
# 3. Or use LMStudio: set USE_LMSTUDIO=True
|
| 51 |
+
transformers>=4.36.0,<5.0.0 # For local model loading (Phi-3, etc.) - version pinned for cache compatibility
|
| 52 |
+
torch>=2.1.0,<2.3.0 # PyTorch for model inference - compatible with transformers
|
| 53 |
accelerate>=0.25.0 # For device_map="auto" and efficient loading
|
| 54 |
sentencepiece>=0.1.99 # Tokenizer support for some models
|
| 55 |
protobuf>=3.20.0 # Required by some tokenizers
|