2. Modified prompt format → slight improvement
3. Switched to FLAN-T5-XL → OOM error

**Solution:** Switched to Zephyr-7B-beta, which produces comprehensive answers.

### Challenge 4: Hugging Face Spaces Python 3.13 Migration

**Problem:** Space failed on startup with `ModuleNotFoundError: No module named 'audioop'`.

**Cause:** Hugging Face Spaces updated to Python 3.13, which removed the deprecated `audioop` module from the standard library. Gradio 4.x depended on `pydub`, which required `audioop`.
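This failure mode can be verified directly. A minimal probe, assuming a standard CPython build (PEP 594 removed `audioop` in 3.13):

```python
import importlib.util
import sys

# PEP 594 removed the deprecated audioop module in Python 3.13, so any
# import of it (direct, or indirect via older pydub) raises
# ModuleNotFoundError at startup.
has_audioop = importlib.util.find_spec("audioop") is not None
print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"audioop present = {has_audioop}")
```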

**Solution:** Upgraded to Gradio 6.3.0, which includes Python 3.13 compatibility fixes.
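In a Spaces `requirements.txt`, the fix amounts to a version bump (a sketch; the project's other pinned dependencies are omitted):

```
gradio>=6.3.0
```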

### Challenge 5: Inference API Changes

**Problem:** `InferenceClient.text_generation()` failed with a "task not supported" error.

**Cause:** The Hugging Face Inference API routing changed, requiring conversational models to use the `chat_completion` endpoint.

**Solution:** Refactored from raw prompt templates to the structured `chat_completion()` API with message arrays.
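A minimal sketch of the refactored call path. The `build_messages` helper and the system prompt are illustrative, not the app's actual code, and the live call assumes `huggingface_hub` with an API token:

```python
def build_messages(context: str, question: str) -> list[dict]:
    """Wrap the old flat prompt template into the message array
    expected by the chat_completion endpoint."""
    return [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

# Refactored call path (requires huggingface_hub and an API token):
#   from huggingface_hub import InferenceClient
#   client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")
#   response = client.chat_completion(
#       messages=build_messages(context, question), max_tokens=512)
#   answer = response.choices[0].message.content
```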

## Limitations