aladhefafalquran committed
Commit 6b31f29 · 1 Parent(s): 80d371c

Fix: Enable Gradio queue for generator functions and suppress T5 warnings


Fixes:
✅ Added demo.queue() to enable generator function support
✅ Added warnings filter to suppress T5 tokenizer FutureWarning
✅ Maintains 100% free functionality

Generator functions (handlers that use yield) require the queue to be enabled in Gradio.
With demo.queue() in place, each yielded value is streamed to the user as a real-time progress update during processing.
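A minimal sketch of the pattern this commit enables: a generator-based handler whose yields Gradio streams to the UI one by one once the queue is on. The function name and wiring below are hypothetical stand-ins for app.py's actual handlers, not code from this repo.

```python
def generate_study_guide(steps=3):
    """Yield intermediate progress messages, then the final result.

    Gradio streams each yielded value to the output component, but only
    when the queue is enabled via demo.queue(); without it, generator
    handlers fail at request time.
    """
    for i in range(1, steps + 1):
        yield f"Processing step {i}/{steps}..."
    yield "Done: study guide ready"

# Hypothetical wiring inside the app (sketch, not the repo's code):
#   with gr.Blocks() as demo:
#       button.click(generate_study_guide, outputs=status_box)
#   demo.queue()   # required for generator handlers
#   demo.launch()
```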

🤖 Generated with Claude Code
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

Files changed (1)
  1. app.py +5 -0
app.py CHANGED
@@ -1,10 +1,14 @@
 import os
 import re
+import warnings
 import gradio as gr
 import fitz
 from transformers import pipeline
 import torch

+# Suppress T5 tokenizer warnings
+warnings.filterwarnings("ignore", category=FutureWarning, module="transformers")
+
 # Initialize models
 print("Loading AI models...")
 device = 0 if torch.cuda.is_available() else -1
@@ -581,4 +585,5 @@ with gr.Blocks(title="Ultimate Exam Prep - Study Guide Generator", theme=gr.them
 """)

 if __name__ == "__main__":
+    demo.queue()  # Enable queue for generator functions
     demo.launch()