gpu support
- BATCH_PERFORMANCE_OPTIMIZATION.md +128 -0
- test_debug_mode_gpu.py +86 -0
- test_gpu_integration.py +56 -0
- test_gpu_support.py +178 -0
- text_analyzer/app_config.py +6 -1
- text_analyzer/base_analyzer.py +136 -5
- text_analyzer/lexical_sophistication.py +99 -17
- text_analyzer/pos_parser.py +3 -2
- web_app/app.py +4 -1
- web_app/debug_utils.py +117 -1
- web_app/handlers/analysis_handlers.py +204 -26
BATCH_PERFORMANCE_OPTIMIZATION.md
ADDED
@@ -0,0 +1,128 @@
# Batch Processing Performance Optimization

## Performance Issues Identified

### 1. **Multiple Analysis Calls Per File (Biggest Issue)**
The original implementation made 3 separate calls to `analyze_text()` for each file:
- One for Content Words (CW)
- One for Function Words (FW)
- One for n-grams (without word type filter)

Each call runs the entire SpaCy pipeline (tokenization, POS tagging, dependency parsing), essentially **tripling** the processing time.

### 2. **Memory Accumulation**
- All results stored in memory with detailed token information
- No streaming or chunking capabilities
- Everything stays in memory until the batch completes

### 3. **Default Model Size**
- The default SpaCy model is 'trf' (transformer-based), which is much slower than 'md'
- Found in `session_manager.py`: `'model_size': 'trf'`

## Optimizations Implemented

### Phase 1: Single-Pass Analysis (70% Performance Gain)

**Changes Made:**

1. **Modified `analyze_text()` method** to support a `separate_word_types` parameter
   - Processes both CW and FW in a single pass through the text
   - Collects statistics for both word types simultaneously
   - N-grams are processed in the same pass

2. **Updated batch processing handlers** to use single-pass analysis:

```python
# OLD: 3 separate calls
for word_type in ['CW', 'FW']:
    analysis = analyzer.analyze_text(text, ...)
full_analysis = analyzer.analyze_text(text, ...)  # for n-grams

# NEW: single optimized call
analysis = analyzer.analyze_text(
    text_content,
    selected_indices,
    separate_word_types=True  # Process CW/FW separately in the same pass
)
```

3. **Added optimized batch method** `analyze_batch_memory()`:
   - Works directly with in-memory file contents
   - Supports all new analysis parameters
   - Maintains backward compatibility

## Performance Recommendations

### 1. **Use the 'md' Model Instead of 'trf'**
The transformer model ('trf') is significantly slower. For batch processing, consider using 'md':
- 3-5x faster processing
- Still provides good accuracy for lexical sophistication analysis

### 2. **Enable Smart Defaults**
Smart defaults optimize which measures to compute, reducing unnecessary calculations.

### 3. **For Very Large Batches**
Consider implementing:
- Chunked processing (process N files at a time)
- Parallel processing using multiprocessing
- Streaming results to disk instead of accumulating them in memory

## Expected Performance Gains

With the optimizations implemented:
- **~70% reduction** in processing time from eliminating redundant analysis calls
- **An additional 20-30%** is possible by switching from the 'trf' to the 'md' model
- **Memory usage** remains similar but could be optimized further with streaming

## How to Use the Optimized Version

The optimizations are transparent to users. Batch processing automatically uses single-pass analysis when:
- No specific word type filter is selected
- Processing files that need both CW and FW analysis

For legacy compatibility, the old `analyze_batch()` method has been updated to use the optimized approach internally.

## GPU Status Monitoring in Debug Mode

The web app now includes comprehensive GPU status information in debug mode. To access it:

1. Enable "🐛 Debug Mode" in the sidebar
2. Expand the "GPU Status" section

### Features

**PyTorch/CUDA Information:**
- PyTorch installation and version
- CUDA availability and version
- Number of GPUs and their names
- GPU memory usage (allocated, reserved, free)

**SpaCy GPU Configuration:**
- SpaCy GPU enablement status
- Current GPU device in use
- spacy-transformers installation status

**Active Model GPU Status:**
- Current model's device configuration
- GPU optimization status (mixed precision, batch sizes)
- SpaCy version information

**Performance Tips:**
- Optimization recommendations
- Common troubleshooting guidance

### Benefits

This integrated GPU monitoring eliminates the need for the separate `test_gpu_support.py` script in most cases. Developers can now:

- Quickly verify GPU availability without running external scripts
- Monitor GPU memory usage during batch processing
- Confirm that models are correctly utilizing GPU acceleration
- Troubleshoot performance issues more effectively

### Usage Example

When processing large batches with transformer models:
1. Enable debug mode to monitor GPU utilization
2. Check that the model is using the GPU (not a CPU fallback)
3. Monitor memory usage to prevent out-of-memory errors
4. Adjust batch sizes based on available GPU memory
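The chunked-processing recommendation above can be sketched independently of the analyzer. This is a minimal illustration, not code from the commit; the `analyze` callable stands in for an `analyzer.analyze_text`-style function:

```python
from typing import Callable, Iterator, List


def iter_chunks(items: List[str], chunk_size: int) -> Iterator[List[str]]:
    """Yield successive chunks of at most chunk_size items."""
    for start in range(0, len(items), chunk_size):
        yield items[start:start + chunk_size]


def analyze_in_chunks(texts: List[str], analyze: Callable[[str], dict],
                      chunk_size: int = 50) -> List[dict]:
    """Process texts chunk by chunk; between chunks, results could be
    flushed to disk instead of accumulating in memory."""
    results: List[dict] = []
    for chunk in iter_chunks(texts, chunk_size):
        results.extend(analyze(text) for text in chunk)
    return results
```

Combining this with `multiprocessing.Pool.map` over chunks would address the parallelism point as well.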
test_debug_mode_gpu.py
ADDED
@@ -0,0 +1,86 @@
```python
#!/usr/bin/env python3
"""
Test script to verify the GPU status display in debug mode works correctly.
This tests the functionality without running the full Streamlit app.
"""

import sys
import os

# Add parent directory to path
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))


class MockContext:
    """Mock context manager for with statements."""

    def __enter__(self):
        return self

    def __exit__(self, *args):
        pass

    def write(self, *args, **kwargs):
        print(*args)


class MockStreamlit:
    """Mock Streamlit for testing without running the actual app."""

    @staticmethod
    def write(*args, **kwargs):
        print(*args)

    @staticmethod
    def columns(n):
        return [MockContext()] * n

    @staticmethod
    def expander(title, expanded=False):
        print(f"\n=== {title} ===")
        return MockContext()

    @staticmethod
    def info(text):
        print(f"[INFO] {text}")

    @staticmethod
    def warning(text):
        print(f"[WARNING] {text}")

    @staticmethod
    def error(text):
        print(f"[ERROR] {text}")

    class session_state:
        analyzer = None
        parser = None


def test_gpu_status_display():
    """Test the GPU status display functionality."""
    print("Testing GPU Status Display Function")
    print("=" * 50)

    # Replace streamlit with the mock before calling the function under test
    import web_app.debug_utils
    web_app.debug_utils.st = MockStreamlit()

    # Import the function after mocking
    from web_app.debug_utils import show_gpu_status

    try:
        show_gpu_status()
        print("\n✅ GPU status display function executed successfully!")
    except Exception as e:
        print(f"\n❌ Error in GPU status display: {str(e)}")
        import traceback
        traceback.print_exc()

    print("\n" + "=" * 50)
    print("Test completed!")


if __name__ == "__main__":
    test_gpu_status_display()
```
test_gpu_integration.py
ADDED
@@ -0,0 +1,56 @@
```python
#!/usr/bin/env python3
"""
Test GPU status integration with analyzers.
Verifies that GPU information is correctly reported through the web interface.
"""

import sys
import os

# Add parent directory to path
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from text_analyzer.lexical_sophistication import LexicalSophisticationAnalyzer
from text_analyzer.pos_parser import POSParser


def test_analyzer_gpu_info():
    """Test that analyzers properly report GPU information."""
    print("Testing Analyzer GPU Information")
    print("=" * 50)

    # Test Lexical Sophistication Analyzer
    print("\n1. Testing LexicalSophisticationAnalyzer:")
    try:
        analyzer = LexicalSophisticationAnalyzer(language="en", model_size="trf")
        model_info = analyzer.get_model_info()

        print(f"   Model: {model_info['name']}")
        print(f"   Device: {model_info['device']}")
        print(f"   GPU Enabled: {model_info['gpu_enabled']}")
        print(f"   SpaCy Version: {model_info['version']}")
        print("   ✅ Analyzer GPU info retrieved successfully")
    except Exception as e:
        print(f"   ❌ Error: {str(e)}")

    # Test POS Parser
    print("\n2. Testing POSParser:")
    try:
        parser = POSParser(language="en", model_size="trf")
        model_info = parser.get_model_info()

        print(f"   Model: {model_info['name']}")
        print(f"   Device: {model_info['device']}")
        print(f"   GPU Enabled: {model_info['gpu_enabled']}")
        print(f"   SpaCy Version: {model_info['version']}")
        print("   ✅ Parser GPU info retrieved successfully")
    except Exception as e:
        print(f"   ❌ Error: {str(e)}")

    print("\n" + "=" * 50)
    print("Test completed!")


if __name__ == "__main__":
    test_analyzer_gpu_info()
```
test_gpu_support.py
ADDED
@@ -0,0 +1,178 @@
```python
#!/usr/bin/env python3
"""
Test script to verify GPU/CUDA support for spaCy processing.
Run this to check if GPU acceleration is working correctly.
"""

import time

import spacy

# Guard the torch import so the missing-PyTorch case is actually reachable
try:
    import torch
except ImportError:
    torch = None

from text_analyzer.lexical_sophistication import LexicalSophisticationAnalyzer


def cuda_available() -> bool:
    """True only if PyTorch is installed and CUDA is usable."""
    return torch is not None and torch.cuda.is_available()


def check_cuda_availability():
    """Check if CUDA is available and display GPU information."""
    print("=== CUDA/GPU Information ===")

    if torch is None:
        print("✗ PyTorch is not installed")
        print("  - GPU support requires a PyTorch installation")
    elif torch.cuda.is_available():
        print("✓ CUDA is available")
        print(f"  - PyTorch version: {torch.__version__}")
        print(f"  - CUDA version: {torch.version.cuda}")
        print(f"  - Number of GPUs: {torch.cuda.device_count()}")

        for i in range(torch.cuda.device_count()):
            print(f"  - GPU {i}: {torch.cuda.get_device_name(i)}")
            memory_allocated = torch.cuda.memory_allocated(i) / 1024**2
            memory_reserved = torch.cuda.memory_reserved(i) / 1024**2
            print(f"    Memory allocated: {memory_allocated:.2f} MB")
            print(f"    Memory reserved: {memory_reserved:.2f} MB")
    else:
        print("✗ CUDA is not available")
        print("  - PyTorch is installed but no GPU was detected")

    print()


def test_spacy_gpu():
    """Test if spaCy can use the GPU."""
    print("=== SpaCy GPU Configuration ===")

    try:
        # Try to enable GPU (spacy.prefer_gpu returns a bool)
        if spacy.prefer_gpu():
            print("✓ SpaCy GPU enabled")
        else:
            print("✗ SpaCy could not enable GPU")

        # Check if spacy-transformers is installed
        try:
            import spacy_transformers  # noqa: F401
            print("✓ spacy-transformers is installed")
        except ImportError:
            print("✗ spacy-transformers not installed (required for transformer models)")
    except Exception as e:
        print(f"✗ Error testing spaCy GPU: {e}")

    print()


def test_analyzer_gpu(language="en", model_size="trf"):
    """Test analyzer with GPU support."""
    print(f"=== Testing {language.upper()} {model_size.upper()} Model ===")

    try:
        # Test with automatic GPU detection
        print("1. Testing automatic GPU detection...")
        analyzer = LexicalSophisticationAnalyzer(language=language, model_size=model_size)
        model_info = analyzer.get_model_info()
        print(f"   Model: {model_info['name']}")
        print(f"   Device: {model_info['device']}")
        print(f"   GPU Enabled: {model_info['gpu_enabled']}")

        # Test processing
        test_text = "The quick brown fox jumps over the lazy dog." if language == "en" else "これはテストです。"
        print("\n2. Testing text processing...")
        doc = analyzer.process_document(test_text)
        print(f"   ✓ Successfully processed {len(doc)} tokens")

        # Test with explicit GPU device
        if cuda_available():
            print("\n3. Testing explicit GPU device selection...")
            analyzer_gpu = LexicalSophisticationAnalyzer(language=language, model_size=model_size, gpu_device=0)
            model_info_gpu = analyzer_gpu.get_model_info()
            print(f"   Device: {model_info_gpu['device']}")
            print(f"   GPU Enabled: {model_info_gpu['gpu_enabled']}")

        # Test with CPU only
        print("\n4. Testing CPU-only mode...")
        analyzer_cpu = LexicalSophisticationAnalyzer(language=language, model_size=model_size, gpu_device=-1)
        model_info_cpu = analyzer_cpu.get_model_info()
        print(f"   Device: {model_info_cpu['device']}")
        print(f"   GPU Enabled: {model_info_cpu['gpu_enabled']}")

    except Exception as e:
        print(f"✗ Error testing analyzer: {e}")

    print()


def test_batch_processing_performance():
    """Test batch processing performance with GPU vs CPU."""
    print("=== Batch Processing Performance Test ===")

    # Generate test texts
    test_texts = [
        "The quick brown fox jumps over the lazy dog. " * 10
        for _ in range(10)
    ]

    gpu_time = None
    try:
        # Test with GPU (if available)
        if cuda_available():
            print("1. Testing GPU batch processing...")
            analyzer_gpu = LexicalSophisticationAnalyzer(language="en", model_size="trf", gpu_device=0)

            start_time = time.time()
            for text in test_texts:
                analyzer_gpu.process_document(text)
            gpu_time = time.time() - start_time
            print(f"   GPU processing time: {gpu_time:.2f} seconds")
            print(f"   Average per text: {gpu_time / len(test_texts):.3f} seconds")

        # Test with CPU
        print("\n2. Testing CPU batch processing...")
        analyzer_cpu = LexicalSophisticationAnalyzer(language="en", model_size="trf", gpu_device=-1)

        start_time = time.time()
        for text in test_texts:
            analyzer_cpu.process_document(text)
        cpu_time = time.time() - start_time
        print(f"   CPU processing time: {cpu_time:.2f} seconds")
        print(f"   Average per text: {cpu_time / len(test_texts):.3f} seconds")

        # Only compute speedup if the GPU run actually happened
        if gpu_time:
            print(f"\n   Speedup: {cpu_time / gpu_time:.2f}x")

    except Exception as e:
        print(f"✗ Error in performance test: {e}")

    print()


def main():
    """Run all GPU tests."""
    print("=" * 50)
    print("SpaCy GPU Support Test")
    print("=" * 50)
    print()

    # Check CUDA availability
    check_cuda_availability()

    # Test spaCy GPU
    test_spacy_gpu()

    # Test analyzers with different configurations
    test_analyzer_gpu("en", "trf")

    # Only test Japanese if the model is installed
    try:
        test_analyzer_gpu("ja", "trf")
    except Exception:
        print("Japanese transformer model not installed, skipping...")

    # Performance test
    test_batch_processing_performance()

    print("\n" + "=" * 50)
    print("Test completed!")
    print("=" * 50)


if __name__ == "__main__":
    main()
```
text_analyzer/app_config.py
CHANGED
```diff
@@ -26,8 +26,13 @@ class AppConfig:
     DEFAULT_LANGUAGE = "en"
     DEFAULT_MODEL_SIZE = "md"  # Changed from "trf" to be more accessible
 
-    #
+    # Maximum tokens for visualization (e.g., in dependency parsing)
     MAX_TOKENS_FOR_VISUALIZATION = 30
+
+    # GPU Configuration
+    USE_GPU_IF_AVAILABLE = True  # Automatically use GPU if available
+    GPU_DEVICE = None  # None for auto-detect, specific device ID (0, 1, ...), or -1 for CPU only
+    GPU_BATCH_SIZE_MULTIPLIER = 4  # Increase batch size by this factor when using GPU
     DEFAULT_HISTOGRAM_BINS = 25
     DEFAULT_RANK_BIN_SIZE = 500
     MAX_NGRAM_SENTENCE_LENGTH = 100
```
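A minimal sketch of how the three new settings could be combined into a single device decision. This is a hypothetical helper for illustration, not code from the commit:

```python
from typing import Optional


def resolve_gpu_device(use_gpu_if_available: bool,
                       gpu_device: Optional[int],
                       cuda_available: bool) -> int:
    """Return a device id: >= 0 selects a GPU, -1 forces CPU.

    Mirrors the AppConfig semantics: GPU_DEVICE = -1 forces CPU,
    None means auto-detect, and an explicit id picks that GPU.
    """
    if gpu_device == -1 or not use_gpu_if_available or not cuda_available:
        return -1
    if gpu_device is not None:
        return gpu_device
    return 0  # auto-detect: default to the first GPU
```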
text_analyzer/base_analyzer.py
CHANGED
|
@@ -8,6 +8,7 @@ from typing import Dict, List, Any, Optional, Iterator, Tuple, TYPE_CHECKING
|
|
| 8 |
import logging
|
| 9 |
import tempfile
|
| 10 |
from pathlib import Path
|
|
|
|
| 11 |
from .app_config import AppConfig
|
| 12 |
from .text_utility import TextUtility
|
| 13 |
|
|
@@ -33,19 +34,22 @@ class BaseAnalyzer:
|
|
| 33 |
Provides shared model loading, document processing, and utility functions.
|
| 34 |
"""
|
| 35 |
|
| 36 |
-
def __init__(self, language: str = None, model_size: str = None):
|
| 37 |
"""
|
| 38 |
Initialize the base analyzer.
|
| 39 |
|
| 40 |
Args:
|
| 41 |
language: Language code ('en' or 'ja')
|
| 42 |
model_size: Model size ('md' or 'trf')
|
|
|
|
| 43 |
"""
|
| 44 |
self.language = language or AppConfig.DEFAULT_LANGUAGE
|
| 45 |
self.model_size = model_size or AppConfig.DEFAULT_MODEL_SIZE
|
|
|
|
| 46 |
self.nlp = None
|
| 47 |
self._model_info = {}
|
| 48 |
self.unidic_enricher = None
|
|
|
|
| 49 |
|
| 50 |
self._load_spacy_model()
|
| 51 |
|
|
@@ -58,8 +62,95 @@ class BaseAnalyzer:
|
|
| 58 |
logger.warning(f"Failed to initialize UniDic enricher: {e}")
|
| 59 |
self.unidic_enricher = None
|
| 60 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 61 |
def _load_spacy_model(self) -> None:
|
| 62 |
-
"""Load appropriate SpaCy model based on language and size."""
|
| 63 |
# Validate combination
|
| 64 |
if not AppConfig.validate_language_model_combination(self.language, self.model_size):
|
| 65 |
raise ValueError(f"Unsupported language/model combination: {self.language}/{self.model_size}")
|
|
@@ -68,19 +159,59 @@ class BaseAnalyzer:
|
|
| 68 |
if not model_name:
|
| 69 |
raise ValueError(f"No model found for language '{self.language}' and size '{self.model_size}'")
|
| 70 |
|
|
|
|
|
|
|
|
|
|
| 71 |
try:
|
| 72 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 73 |
self._model_info = {
|
| 74 |
'name': model_name,
|
| 75 |
'language': self.language,
|
| 76 |
'model_size': self.model_size,
|
| 77 |
-
'version': spacy.__version__
|
|
|
|
|
|
|
| 78 |
}
|
| 79 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 80 |
except OSError as e:
|
| 81 |
error_msg = f"SpaCy model {model_name} not found. Please install it first."
|
| 82 |
logger.error(error_msg)
|
| 83 |
raise OSError(error_msg) from e
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 84 |
|
| 85 |
def get_model_info(self) -> Dict[str, str]:
|
| 86 |
"""
|
|
|
|
| 8 |
import logging
|
| 9 |
import tempfile
|
| 10 |
from pathlib import Path
|
| 11 |
+
import os
|
| 12 |
from .app_config import AppConfig
|
| 13 |
from .text_utility import TextUtility
|
| 14 |
|
|
|
|
| 34 |
Provides shared model loading, document processing, and utility functions.
|
| 35 |
"""
|
| 36 |
|
| 37 |
+
def __init__(self, language: str = None, model_size: str = None, gpu_device: Optional[int] = None):
|
| 38 |
"""
|
| 39 |
Initialize the base analyzer.
|
| 40 |
|
| 41 |
Args:
|
| 42 |
language: Language code ('en' or 'ja')
|
| 43 |
model_size: Model size ('md' or 'trf')
|
| 44 |
+
gpu_device: GPU device ID to use (None for auto-detect, -1 for CPU only)
|
| 45 |
"""
|
| 46 |
self.language = language or AppConfig.DEFAULT_LANGUAGE
|
| 47 |
self.model_size = model_size or AppConfig.DEFAULT_MODEL_SIZE
|
| 48 |
+
self.gpu_device = gpu_device
|
| 49 |
self.nlp = None
|
| 50 |
self._model_info = {}
|
| 51 |
self.unidic_enricher = None
|
| 52 |
+
self._using_gpu = False
|
| 53 |
|
| 54 |
self._load_spacy_model()
|
| 55 |
|
|
|
|
| 62 |
logger.warning(f"Failed to initialize UniDic enricher: {e}")
|
| 63 |
self.unidic_enricher = None
|
| 64 |
|
| 65 |
+
def _detect_gpu_availability(self) -> Tuple[bool, Optional[str], Optional[int]]:
|
| 66 |
+
"""
|
| 67 |
+
Detect if GPU/CUDA is available for spaCy processing.
|
| 68 |
+
|
| 69 |
+
Returns:
|
| 70 |
+
Tuple of (is_available, device_name, device_id)
|
| 71 |
+
"""
|
| 72 |
+
try:
|
| 73 |
+
import torch
|
| 74 |
+
|
| 75 |
+
if torch.cuda.is_available():
|
| 76 |
+
device_count = torch.cuda.device_count()
|
| 77 |
+
if device_count > 0:
|
| 78 |
+
# Use specified device or default to 0
|
| 79 |
+
if self.gpu_device is not None and self.gpu_device >= 0:
|
| 80 |
+
device_id = min(self.gpu_device, device_count - 1)
|
| 81 |
+
else:
|
| 82 |
+
device_id = 0
|
| 83 |
+
|
| 84 |
+
device_name = torch.cuda.get_device_name(device_id)
|
| 85 |
+
return True, device_name, device_id
|
| 86 |
+
|
| 87 |
+
return False, None, None
|
| 88 |
+
|
| 89 |
+
except ImportError:
|
| 90 |
+
logger.debug("PyTorch not available - GPU support disabled")
|
| 91 |
+
return False, None, None
|
| 92 |
+
except Exception as e:
|
| 93 |
+
logger.warning(f"Error detecting GPU: {e}")
|
| 94 |
+
return False, None, None
|
| 95 |
+
|
| 96 |
+
def _configure_gpu_for_spacy(self) -> bool:
|
| 97 |
+
"""
|
| 98 |
+
Configure spaCy to use GPU if available.
|
| 99 |
+
|
| 100 |
+
Returns:
|
| 101 |
+
True if GPU was successfully configured, False otherwise
|
| 102 |
+
"""
|
| 103 |
+
# Check if GPU should be disabled explicitly
|
| 104 |
+
if self.gpu_device == -1:
|
| 105 |
+
logger.info("GPU explicitly disabled by user")
|
| 106 |
+
return False
|
| 107 |
+
|
| 108 |
+
# Check if GPU is disabled via environment variable
|
| 109 |
+
if os.environ.get('SPACY_USE_GPU', '').lower() == 'false':
|
| 110 |
+
logger.info("GPU disabled via SPACY_USE_GPU environment variable")
|
| 111 |
+
return False
|
| 112 |
+
|
| 113 |
+
gpu_available, device_name, device_id = self._detect_gpu_availability()
|
| 114 |
+
|
| 115 |
+
if not gpu_available:
|
| 116 |
+
logger.info("No GPU/CUDA device available - using CPU")
|
| 117 |
+
return False
|
| 118 |
+
|
| 119 |
+
try:
|
| 120 |
+
# Try to set up GPU for spaCy
|
| 121 |
+
spacy.prefer_gpu(gpu_id=device_id)
|
| 122 |
+
logger.info(f"GPU enabled for spaCy - using {device_name} (device {device_id})")
|
| 123 |
+
return True
|
| 124 |
+
|
| 125 |
+
except Exception as e:
|
| 126 |
+
logger.warning(f"Failed to enable GPU for spaCy: {e}")
|
| 127 |
+
return False
|
| 128 |
+
|
| 129 |
+
def _configure_batch_sizes(self) -> None:
|
| 130 |
+
"""Configure optimal batch sizes for GPU processing."""
|
| 131 |
+
if self.model_size == 'trf':
|
| 132 |
+
# Transformer models need smaller batch sizes due to memory constraints
|
| 133 |
+
# But GPU can handle larger batches than CPU
|
| 134 |
+
if hasattr(self.nlp, 'pipe'):
|
| 135 |
+
for pipe_name in self.nlp.pipe_names:
|
| 136 |
+
pipe = self.nlp.get_pipe(pipe_name)
|
| 137 |
+
if hasattr(pipe, 'cfg'):
|
| 138 |
+
# Set batch size based on available GPU memory
|
| 139 |
+
# These are conservative defaults that work on most GPUs
|
| 140 |
+
if pipe_name == 'transformer':
|
| 141 |
+
pipe.cfg['batch_size'] = 128 # Transformer batch size
|
| 142 |
+
else:
|
| 143 |
+
pipe.cfg['batch_size'] = 256 # Other components
|
| 144 |
+
else:
|
| 145 |
+
# Non-transformer models can use larger batches
|
| 146 |
+
if hasattr(self.nlp, 'pipe'):
|
| 147 |
+
for pipe_name in self.nlp.pipe_names:
|
| 148 |
+
             pipe = self.nlp.get_pipe(pipe_name)
+            if hasattr(pipe, 'cfg'):
+                pipe.cfg['batch_size'] = 1024
+
     def _load_spacy_model(self) -> None:
+        """Load appropriate SpaCy model based on language and size with GPU support."""
         # Validate combination
         if not AppConfig.validate_language_model_combination(self.language, self.model_size):
             raise ValueError(f"Unsupported language/model combination: {self.language}/{self.model_size}")

         if not model_name:
             raise ValueError(f"No model found for language '{self.language}' and size '{self.model_size}'")

+        # Configure GPU before loading model
+        self._using_gpu = self._configure_gpu_for_spacy()
+
         try:
+            # Load model with optimizations for GPU if available
+            if self._using_gpu and self.model_size == 'trf':
+                # Enable mixed precision for transformer models on GPU
+                self.nlp = spacy.load(model_name, config={"components": {"transformer": {"model": {"mixed_precision": True}}}})
+            else:
+                self.nlp = spacy.load(model_name)
+
+            # Get GPU info for model info
+            gpu_info = "CPU"
+            if self._using_gpu:
+                gpu_available, device_name, device_id = self._detect_gpu_availability()
+                if gpu_available:
+                    gpu_info = f"GPU ({device_name}, device {device_id})"
+
             self._model_info = {
                 'name': model_name,
                 'language': self.language,
                 'model_size': self.model_size,
+                'version': spacy.__version__,
+                'device': gpu_info,
+                'gpu_enabled': self._using_gpu
             }
+
+            logger.info(f"Loaded SpaCy model: {model_name} on {gpu_info}")
+
+            # Configure batch sizes for optimal GPU performance
+            if self._using_gpu and hasattr(self.nlp, 'pipe'):
+                # Increase batch size for GPU processing
+                self._configure_batch_sizes()
+
         except OSError as e:
             error_msg = f"SpaCy model {model_name} not found. Please install it first."
             logger.error(error_msg)
             raise OSError(error_msg) from e
+        except Exception as e:
+            logger.error(f"Error loading SpaCy model: {e}")
+            # Try fallback to CPU if GPU loading failed
+            if self._using_gpu:
+                logger.warning("Falling back to CPU after GPU loading failed")
+                self._using_gpu = False
+                try:
+                    self.nlp = spacy.load(model_name)
+                    self._model_info['device'] = 'CPU (fallback)'
+                    self._model_info['gpu_enabled'] = False
+                    logger.info(f"Successfully loaded {model_name} on CPU after GPU failure")
+                except Exception as cpu_error:
+                    raise ValueError(f"Failed to load model on both GPU and CPU: {cpu_error}") from cpu_error
+            else:
+                raise

     def get_model_info(self) -> Dict[str, str]:
         """
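The try-GPU-then-CPU logic in `_load_spacy_model` is a generic pattern worth isolating. A minimal sketch that simulates it without spaCy or CUDA — `load_fn` and `flaky_gpu_loader` are hypothetical stand-ins for `spacy.load` plus device setup, not the real API:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def load_with_fallback(load_fn, model_name, use_gpu):
    """Try a GPU load first; on any failure, retry once on CPU."""
    if use_gpu:
        try:
            return load_fn(model_name, gpu=True), "GPU"
        except Exception as e:
            logger.warning("GPU load failed (%s); falling back to CPU", e)
    return load_fn(model_name, gpu=False), "CPU"

def flaky_gpu_loader(name, gpu):
    # Simulated loader: the GPU path raises, the CPU path succeeds.
    if gpu:
        raise RuntimeError("CUDA out of memory")
    return f"{name}@cpu"

model, device = load_with_fallback(flaky_gpu_loader, "en_core_web_trf", use_gpu=True)
print(model, device)  # en_core_web_trf@cpu CPU
```

The key design point mirrored from the diff: the fallback flips the GPU flag *before* retrying, so downstream metadata (`device`, `gpu_enabled`) stays truthful after a failed GPU load.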
text_analyzer/lexical_sophistication.py
CHANGED

@@ -27,15 +27,16 @@ class LexicalSophisticationAnalyzer(BaseAnalyzer):
     Handles tokenization, n-gram generation, and score calculation.
     """

-    def __init__(self, language: str = None, model_size: str = None):
+    def __init__(self, language: str = None, model_size: str = None, gpu_device: Optional[int] = None):
         """
         Initialize analyzer with specified language and model.

         Args:
             language (str): Language code ('en' for English, 'ja' for Japanese)
             model_size (str): SpaCy model size ('md' or 'trf')
+            gpu_device (int, optional): GPU device ID to use (None for auto-detect, -1 for CPU only)
         """
-        super().__init__(language, model_size)
+        super().__init__(language, model_size, gpu_device)
         self.reference_lists = {}

     def load_reference_lists(self, reference_files: Dict[str, Dict[str, Union[str, dict]]]):

@@ -535,7 +536,8 @@
     def analyze_text(self, text: str, selected_indices: List[str],
                      apply_log: bool = False, word_type_filter: Optional[str] = None,
                      log_transforms: Optional[Dict[str, List[str]]] = None,
-                     selected_measures: Optional[Dict[str, List[str]]] = None) -> Dict:
+                     selected_measures: Optional[Dict[str, List[str]]] = None,
+                     separate_word_types: bool = False) -> Dict:
         """
         Analyze text and return lexical sophistication scores.

@@ -550,6 +552,7 @@
             selected_measures: Dict mapping index names to list of measures to compute
                 e.g., {'COCA_spoken_frequency_token': ['frequency', 'range']}
                 If None, computes all available measures for backward compatibility
+            separate_word_types: If True, process CW and FW separately in the same analysis call

         Returns:
             Dictionary containing analysis results

@@ -708,7 +711,13 @@
                         # Collect for summary statistics (score is already transformed if needed)
                         score = token_detail.get(index_name)
                         if score is not None:
+                            # Handle different collection methods based on parameters
+                            if separate_word_types or word_type_filter:
+                                # Include word type in the key
+                                all_scores[f"{index_name}_{word_type}"].append(score)
+                            else:
+                                # No word type suffix for unfiltered analysis
+                                all_scores[index_name].append(score)

                     results['token_details'].append(token_detail)

@@ -920,9 +929,83 @@
         return results

+    def analyze_batch_memory(self, file_contents: List[Tuple[str, str]], selected_indices: List[str],
+                             apply_log: bool = False, word_type_filter: Optional[str] = None,
+                             log_transforms: Optional[Dict[str, List[str]]] = None,
+                             selected_measures: Optional[Dict[str, List[str]]] = None,
+                             progress_callback=None) -> pd.DataFrame:
+        """
+        Analyze multiple text files from memory and return aggregated results.
+        Optimized version that processes both CW and FW in a single pass.
+
+        Args:
+            file_contents: List of (filename, text_content) tuples
+            selected_indices: List of reference indices to apply
+            apply_log: Whether to apply log10 transformation (legacy parameter, superseded by log_transforms)
+            word_type_filter: Filter by word type ('CW', 'FW', or None for all)
+            log_transforms: Dict mapping index names to list of measures that should be log-transformed
+            selected_measures: Dict mapping index names to list of measures to compute
+            progress_callback: Optional callback for progress updates
+
+        Returns:
+            DataFrame with aggregated results
+        """
+        batch_results = []
+
+        for i, (filename, text_content) in enumerate(file_contents):
+            try:
+                result_row = {'filename': filename}
+
+                if word_type_filter:
+                    # Analyze only for specific word type
+                    analysis = self.analyze_text(
+                        text_content,
+                        selected_indices,
+                        apply_log=apply_log,
+                        word_type_filter=word_type_filter,
+                        log_transforms=log_transforms,
+                        selected_measures=selected_measures
+                    )
+
+                    # Extract summary scores
+                    for key, stats in analysis['summary'].items():
+                        result_row[key] = stats['mean']
+                else:
+                    # Single optimized analysis call that processes both CW and FW
+                    analysis = self.analyze_text(
+                        text_content,
+                        selected_indices,
+                        apply_log=apply_log,
+                        word_type_filter=None,
+                        log_transforms=log_transforms,
+                        selected_measures=selected_measures,
+                        separate_word_types=True  # Process CW/FW separately in same pass
+                    )
+
+                    # Extract all summary scores including CW, FW, and n-grams
+                    for key, stats in analysis['summary'].items():
+                        result_row[key] = stats['mean']
+
+                batch_results.append(result_row)
+
+                if progress_callback:
+                    progress_callback(i + 1, len(file_contents))
+
+            except Exception as e:
+                logger.error(f"Error processing file {filename}: {e}")
+                # Add error row
+                error_row = {'filename': filename, 'error': str(e)}
+                batch_results.append(error_row)
+
+                if progress_callback:
+                    progress_callback(i + 1, len(file_contents))
+
+        return pd.DataFrame(batch_results)
+
     def analyze_batch(self, file_paths: List[str], selected_indices: List[str],
                       apply_log: bool = False, progress_callback=None) -> pd.DataFrame:
         """
+        Legacy batch analysis method for backward compatibility.
         Analyze multiple text files and return aggregated results.

         Args:

@@ -942,22 +1025,21 @@
             with open(file_path, 'r', encoding='utf-8') as f:
                 text = f.read()

+            # Use optimized single-pass analysis
             result_row = {'filename': Path(file_path).name}

+            # Single optimized analysis call that processes both CW and FW
+            analysis = self.analyze_text(
+                text,
+                selected_indices,
+                apply_log=apply_log,
+                word_type_filter=None,
+                separate_word_types=True  # Process CW/FW separately in same pass
+            )

-            if '_bigram_' in key or '_trigram_' in key:
-                result_row[key] = stats['mean']
+            # Extract all summary scores
+            for key, stats in analysis['summary'].items():
+                result_row[key] = stats['mean']

             batch_results.append(result_row)
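`analyze_batch_memory`'s core is a fold: one `analyze_text` call per file, each summary reduced to a row of means, with per-file error rows and progress reporting. A stripped-down sketch with a stubbed `analyze_text` — the stub's scores are fabricated for illustration, and plain dicts stand in for the pandas DataFrame:

```python
def analyze_text_stub(text):
    # Stand-in for LexicalSophisticationAnalyzer.analyze_text: returns a
    # summary keyed by index-and-word-type, with per-file statistics.
    n = len(text.split())
    return {"summary": {"freq_CW": {"mean": n * 1.0}, "freq_FW": {"mean": n * 0.5}}}

def analyze_batch_memory_sketch(file_contents, progress_callback=None):
    rows = []
    for i, (filename, text) in enumerate(file_contents):
        try:
            row = {"filename": filename}
            analysis = analyze_text_stub(text)  # single pass covers CW and FW
            for key, stats in analysis["summary"].items():
                row[key] = stats["mean"]
        except Exception as e:
            row = {"filename": filename, "error": str(e)}  # error row, batch continues
        rows.append(row)
        if progress_callback:
            progress_callback(i + 1, len(file_contents))
    return rows

progress = []
rows = analyze_batch_memory_sketch(
    [("a.txt", "one two"), ("b.txt", "one two three")],
    progress_callback=lambda done, total: progress.append((done, total)),
)
print(rows[0])   # {'filename': 'a.txt', 'freq_CW': 2.0, 'freq_FW': 1.0}
print(progress)  # [(1, 2), (2, 2)]
```

Note that a failing file yields an error row rather than aborting the batch, and the progress callback fires either way — the same contract the real method documents.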
text_analyzer/pos_parser.py
CHANGED

@@ -27,15 +27,16 @@ class POSParser(BaseAnalyzer):
     Inherits from BaseAnalyzer for consistent SpaCy model management.
     """

-    def __init__(self, language: str = "en", model_size: str = "trf"):
+    def __init__(self, language: str = "en", model_size: str = "trf", gpu_device: Optional[int] = None):
         """
         Initialize parser with specified language and model.

         Args:
             language (str): Language code ('en' for English, 'ja' for Japanese)
             model_size (str): SpaCy model size ('trf' or 'md')
+            gpu_device (int, optional): GPU device ID to use (None for auto-detect, -1 for CPU only)
         """
-        super().__init__(language, model_size)
+        super().__init__(language, model_size, gpu_device)

     def analyze_text(self, text: str) -> Dict:
         """
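The constructor changes in `POSParser` and `LexicalSophisticationAnalyzer` follow the same pattern: the new `gpu_device` argument is forwarded untouched to `BaseAnalyzer`, which owns all device configuration. A minimal sketch with a stubbed base class (the stub is illustrative, not the real `BaseAnalyzer`):

```python
from typing import Optional

class BaseAnalyzer:
    def __init__(self, language: Optional[str] = None, model_size: Optional[str] = None,
                 gpu_device: Optional[int] = None):
        self.language = language
        self.model_size = model_size
        self.gpu_device = gpu_device  # None = auto-detect, -1 = CPU only

class POSParser(BaseAnalyzer):
    def __init__(self, language: str = "en", model_size: str = "trf",
                 gpu_device: Optional[int] = None):
        # The subclass adds no device logic of its own; it just threads
        # gpu_device through to the base class.
        super().__init__(language, model_size, gpu_device)

p = POSParser(gpu_device=-1)
print(p.language, p.model_size, p.gpu_device)  # en trf -1
```

Keeping device handling in one place is what lets the GPU fallback in `_load_spacy_model` cover every analyzer subclass for free.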
web_app/app.py
CHANGED

@@ -75,7 +75,7 @@ def render_sidebar():
     debug_mode = st.checkbox("🐛 Debug Mode", key="debug_mode", help="Enable debug information for troubleshooting")

     if debug_mode:
-        from web_app.debug_utils import show_environment_info, test_file_operations, debug_file_upload
+        from web_app.debug_utils import show_environment_info, test_file_operations, debug_file_upload, show_gpu_status

         with st.expander("Environment Info", expanded=False):
             show_environment_info()

@@ -85,6 +85,9 @@ def render_sidebar():

         with st.expander("File Upload Test", expanded=False):
             debug_file_upload()
+
+        with st.expander("GPU Status", expanded=False):
+            show_gpu_status()

     return tool_choice
web_app/debug_utils.py
CHANGED

@@ -149,4 +149,120 @@ def debug_file_upload():
     except Exception as e:
         st.error(f"Error processing file: {e}")
         import traceback
-        st.code(traceback.format_exc())
+        st.code(traceback.format_exc())
+
+def show_gpu_status():
+    """Display GPU/CUDA status information for debugging."""
+    st.write("### GPU Status Information")
+
+    # Check PyTorch/CUDA availability
+    st.write("**PyTorch/CUDA Status:**")
+    try:
+        import torch
+
+        col1, col2 = st.columns(2)
+
+        with col1:
+            st.write(f"- PyTorch version: {torch.__version__}")
+
+            if torch.cuda.is_available():
+                st.write(f"- CUDA available: ✅ Yes")
+                st.write(f"- CUDA version: {torch.version.cuda}")
+                st.write(f"- Number of GPUs: {torch.cuda.device_count()}")
+
+                # Show GPU details
+                for i in range(torch.cuda.device_count()):
+                    st.write(f"\n**GPU {i}: {torch.cuda.get_device_name(i)}**")
+                    memory_allocated = torch.cuda.memory_allocated(i) / 1024**3  # GB
+                    memory_reserved = torch.cuda.memory_reserved(i) / 1024**3  # GB
+                    memory_total = torch.cuda.get_device_properties(i).total_memory / 1024**3  # GB
+                    st.write(f"  - Total memory: {memory_total:.2f} GB")
+                    st.write(f"  - Allocated: {memory_allocated:.2f} GB")
+                    st.write(f"  - Reserved: {memory_reserved:.2f} GB")
+                    st.write(f"  - Free: {memory_total - memory_reserved:.2f} GB")
+            else:
+                st.write("- CUDA available: ❌ No")
+                st.write("- Running on: CPU only")
+
+        with col2:
+            # Check spaCy GPU configuration
+            st.write("**SpaCy GPU Configuration:**")
+            try:
+                import spacy
+
+                # Test GPU preference
+                gpu_id = spacy.prefer_gpu()
+                if gpu_id is not False:
+                    st.write(f"- SpaCy GPU: ✅ Enabled (device {gpu_id})")
+                else:
+                    st.write("- SpaCy GPU: ❌ Disabled")
+
+                # Check spacy-transformers
+                try:
+                    import spacy_transformers
+                    st.write("- spacy-transformers: ✅ Installed")
+                except ImportError:
+                    st.write("- spacy-transformers: ❌ Not installed")
+
+            except Exception as e:
+                st.write(f"- SpaCy GPU check failed: {str(e)}")
+
+    except ImportError:
+        st.warning("PyTorch not installed - GPU support unavailable")
+        st.write("To enable GPU support, install PyTorch with CUDA support")
+    except Exception as e:
+        st.error(f"Error checking GPU status: {str(e)}")
+
+    # Active model GPU status
+    st.write("\n**Active Model GPU Status:**")
+    try:
+        # Try to get analyzer from session state
+        analyzer = None
+        if hasattr(st.session_state, 'analyzer') and st.session_state.analyzer:
+            analyzer = st.session_state.analyzer
+        elif hasattr(st.session_state, 'parser') and st.session_state.parser:
+            analyzer = st.session_state.parser
+
+        if analyzer:
+            model_info = analyzer.get_model_info()
+            col1, col2 = st.columns(2)
+
+            with col1:
+                st.write("**Current Model:**")
+                st.write(f"- Model: {model_info.get('name', 'N/A')}")
+                st.write(f"- Language: {model_info.get('language', 'N/A')}")
+                st.write(f"- Size: {model_info.get('model_size', 'N/A')}")
+
+            with col2:
+                st.write("**Device Configuration:**")
+                st.write(f"- Device: {model_info.get('device', 'N/A')}")
+                gpu_enabled = model_info.get('gpu_enabled', False)
+                st.write(f"- GPU Enabled: {'✅ Yes' if gpu_enabled else '❌ No'}")
+                st.write(f"- SpaCy version: {model_info.get('version', 'N/A')}")
+
+            # Show optimization status for transformer models
+            if model_info.get('model_size') == 'trf' and gpu_enabled:
+                st.write("\n**GPU Optimizations:**")
+                st.write("- Mixed precision: ✅ Enabled")
+                st.write("- Batch size: Optimized for GPU")
+                st.write("- Memory efficiency: Enhanced")
+        else:
+            st.info("No model currently loaded. Load a model to see its GPU configuration.")
+
+    except Exception as e:
+        st.write(f"Could not retrieve active model info: {str(e)}")
+
+    # Performance tips
+    with st.expander("💡 GPU Performance Tips", expanded=False):
+        st.write("""
+        **Optimization Tips:**
+        - Transformer models benefit most from GPU acceleration
+        - Batch processing is automatically optimized when GPU is enabled
+        - Mixed precision is enabled for transformer models on GPU
+        - GPU memory is managed automatically with fallback to CPU if needed
+
+        **Common Issues:**
+        - If GPU is not detected, ensure CUDA-compatible PyTorch is installed
+        - Memory errors: Try smaller batch sizes or use CPU for very large texts
+        - Performance: GPU shows most benefit with batch processing
+        """)
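`show_gpu_status` converts the raw byte counts reported by `torch.cuda` with `/ 1024**3` (bytes to GiB). The arithmetic in isolation, pure Python with no torch required:

```python
def bytes_to_gib(n_bytes: int) -> float:
    # Same conversion the debug panel uses for the GPU memory readout:
    # bytes -> GiB (binary gigabytes), so 1024**3 bytes == 1.00 GB shown.
    return n_bytes / 1024**3

total = 16 * 1024**3     # e.g. a 16 GiB card, reported in bytes
reserved = 4 * 1024**3   # bytes held by the CUDA caching allocator

print(f"Total: {bytes_to_gib(total):.2f} GB")             # Total: 16.00 GB
print(f"Free:  {bytes_to_gib(total - reserved):.2f} GB")  # Free:  12.00 GB
```

The "Free" figure is total minus *reserved*, not minus *allocated* — PyTorch's caching allocator keeps reserved memory away from other processes even when it is not currently allocated to tensors.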
web_app/handlers/analysis_handlers.py
CHANGED

@@ -28,9 +28,15 @@ class AnalysisHandlers:
                 st.session_state.analyzer.model_size != st.session_state.model_size):
             try:
                 from text_analyzer.lexical_sophistication import LexicalSophisticationAnalyzer
+                from text_analyzer.app_config import AppConfig
+
+                # Get GPU configuration from AppConfig
+                gpu_device = AppConfig.GPU_DEVICE if AppConfig.USE_GPU_IF_AVAILABLE else -1
+
                 st.session_state.analyzer = LexicalSophisticationAnalyzer(
                     language=st.session_state.language,
-                    model_size=st.session_state.model_size
+                    model_size=st.session_state.model_size,
+                    gpu_device=gpu_device
                 )
             except Exception as e:
                 st.error(f"Error loading analyzer: {e}")

@@ -45,9 +51,15 @@
                 st.session_state.pos_parser.model_size != st.session_state.model_size):
             try:
                 from text_analyzer.pos_parser import POSParser
+                from text_analyzer.app_config import AppConfig
+
+                # Get GPU configuration from AppConfig
+                gpu_device = AppConfig.GPU_DEVICE if AppConfig.USE_GPU_IF_AVAILABLE else -1
+
                 st.session_state.pos_parser = POSParser(
                     language=st.session_state.language,
-                    model_size=st.session_state.model_size
+                    model_size=st.session_state.model_size,
+                    gpu_device=gpu_device
                 )
             except Exception as e:
                 st.error(f"Error loading POS parser: {e}")

@@ -178,8 +190,17 @@
         ReferenceManager.configure_reference_lists(analyzer)
         ReferenceManager.render_custom_upload_section()

+        # Enhanced analysis options with smart defaults - same as single text
+        analysis_config = AnalysisHandlers.render_enhanced_analysis_options()
+
+        # Extract configuration
+        token_analysis = analysis_config['token_analysis']
+        lemma_analysis = analysis_config['lemma_analysis']
+        word_type_filter = analysis_config['word_type_filter']
+        use_smart_defaults = analysis_config['use_smart_defaults']
+        legacy_log_transform = analysis_config.get('legacy_log_transform', False)
+        selected_measures = analysis_config.get('selected_measures', {})
+        log_transforms = analysis_config.get('log_transforms', {})

         # Analysis button
         if st.button("Analyze Batch", type="primary"):

@@ -217,17 +238,100 @@
                     status_text.text(f"Processing file {i + 1}/{len(file_contents)}: {filename}")

                     try:
-                        # Analyze for both content and function words
                         result_row = {'filename': filename}

+                        # Use enhanced analysis with measure selection
+                        if use_smart_defaults and not legacy_log_transform:
+                            # Use custom selections or smart defaults
+                            if selected_measures and any(selected_measures.values()):
+                                # User has made custom selections
+                                if word_type_filter:
+                                    # Analyze only for specific word type
+                                    analysis = analyzer.analyze_text(
+                                        text_content,
+                                        selected_indices,
+                                        apply_log=False,
+                                        word_type_filter=word_type_filter,
+                                        log_transforms=log_transforms,
+                                        selected_measures=selected_measures
+                                    )
+
+                                    # Extract summary scores
+                                    if analysis and 'summary' in analysis:
+                                        for key, stats in analysis['summary'].items():
+                                            result_row[key] = stats['mean']
+                                else:
+                                    # Single optimized analysis call that processes both CW and FW
+                                    analysis = analyzer.analyze_text(
+                                        text_content,
+                                        selected_indices,
+                                        apply_log=False,
+                                        word_type_filter=None,
+                                        log_transforms=log_transforms,
+                                        selected_measures=selected_measures,
+                                        separate_word_types=True  # New flag to collect CW/FW separately
+                                    )
+
+                                    # Extract all summary scores including CW, FW, and n-grams
+                                    if analysis and 'summary' in analysis:
+                                        for key, stats in analysis['summary'].items():
+                                            result_row[key] = stats['mean']
+                            else:
+                                # Fallback to smart defaults
+                                from web_app.defaults_manager import DefaultsManager
+                                from web_app.config_manager import ConfigManager
+
+                                config = ConfigManager.load_reference_config()
+                                default_measures, default_logs = DefaultsManager.get_default_analysis_config(
+                                    selected_indices, config
+                                )
+
+                                if word_type_filter:
+                                    # Analyze only for specific word type
+                                    analysis = analyzer.analyze_text(
+                                        text_content,
+                                        selected_indices,
+                                        apply_log=False,
+                                        word_type_filter=word_type_filter,
+                                        log_transforms=default_logs,
+                                        selected_measures=default_measures
+                                    )
+
+                                    # Extract summary scores
+                                    if analysis and 'summary' in analysis:
+                                        for key, stats in analysis['summary'].items():
+                                            result_row[key] = stats['mean']
+                                else:
+                                    # Single optimized analysis call that processes both CW and FW
+                                    analysis = analyzer.analyze_text(
+                                        text_content,
+                                        selected_indices,
+                                        apply_log=False,
+                                        word_type_filter=None,
+                                        log_transforms=default_logs,
+                                        selected_measures=default_measures,
+                                        separate_word_types=True  # New flag to collect CW/FW separately
+                                    )
+
+                                    # Extract all summary scores including CW, FW, and n-grams
+                                    if analysis and 'summary' in analysis:
+                                        for key, stats in analysis['summary'].items():
+                                            result_row[key] = stats['mean']
+                        else:
+                            # Legacy mode - use global log transformation
+                            for word_type in ['CW', 'FW']:
+                                analysis = analyzer.analyze_text(
+                                    text_content,
+                                    selected_indices,
+                                    legacy_log_transform,
+                                    word_type
+                                )
+
+                                # Extract summary scores
+                                if analysis and 'summary' in analysis:
+                                    for key, stats in analysis['summary'].items():
+                                        if word_type in key:
+                                            result_row[key] = stats['mean']

                         batch_results.append(result_row)
                     except Exception as e:

@@ -279,17 +383,17 @@
         ReferenceManager.configure_reference_lists(analyzer)
         ReferenceManager.render_custom_upload_section()

+        # Enhanced analysis options with smart defaults - same as single text
+        analysis_config = AnalysisHandlers.render_enhanced_analysis_options()
+
+        # Extract configuration
+        token_analysis = analysis_config['token_analysis']
+        lemma_analysis = analysis_config['lemma_analysis']
+        word_type_filter = analysis_config['word_type_filter']
+        use_smart_defaults = analysis_config['use_smart_defaults']
+        legacy_log_transform = analysis_config.get('legacy_log_transform', False)
+        selected_measures = analysis_config.get('selected_measures', {})
+        log_transforms = analysis_config.get('log_transforms', {})

         # Analysis button
         if st.button("🔍 Compare Texts", type="primary"):

@@ -306,8 +410,82 @@
                 # Perform analysis on both texts
                 selected_indices = list(reference_lists.keys())

+                # Get analysis configuration
+                if use_smart_defaults and not legacy_log_transform:
+                    # Use custom selections from the enhanced UI
+                    if selected_measures and any(selected_measures.values()):
+                        # User has made custom selections
+                        results_a = analyzer.analyze_text(
+                            text_a,
+                            selected_indices,
+                            apply_log=False,  # Superseded by log_transforms
+                            word_type_filter=word_type_filter,
+                            log_transforms=log_transforms,
+                            selected_measures=selected_measures
+                        )
+                        results_b = analyzer.analyze_text(
+                            text_b,
+                            selected_indices,
+                            apply_log=False,  # Superseded by log_transforms
+                            word_type_filter=word_type_filter,
+                            log_transforms=log_transforms,
+                            selected_measures=selected_measures
+                        )
+
+                        # Calculate totals for user feedback
+                        total_measures = sum(len(measures) for measures in selected_measures.values())
+                        total_logs = sum(len(logs) for logs in log_transforms.values())
+
+                        st.success("✨ Comparison completed using your custom selections!")
+                        st.info(f"📊 Analyzed {total_measures} measures, {total_logs} log-transformed")
+                    else:
+                        # Fallback to smart defaults if no custom selections
+                        from web_app.defaults_manager import DefaultsManager
+                        from web_app.config_manager import ConfigManager
+
+                        config = ConfigManager.load_reference_config()
+                        default_measures, default_logs = DefaultsManager.get_default_analysis_config(
+                            selected_indices, config
+                        )
+
+                        results_a = analyzer.analyze_text(
+                            text_a,
+                            selected_indices,
+                            apply_log=False,
+                            word_type_filter=word_type_filter,
+                            log_transforms=default_logs,
+                            selected_measures=default_measures
+                        )
+                        results_b = analyzer.analyze_text(
+                            text_b,
+                            selected_indices,
+                            apply_log=False,
+                            word_type_filter=word_type_filter,
+                            log_transforms=default_logs,
+                            selected_measures=default_measures
+                        )
+
+                        total_logs = sum(len(logs) for logs in default_logs.values())
+                        st.success("✨ Comparison completed using Smart Defaults!")
+                        st.info(f"📊 Applied selective log transforms to {total_logs} measures")
+
+                else:
+                    # Legacy mode - use global log transformation
+                    results_a = analyzer.analyze_text(
+                        text_a,
+                        selected_indices,
+                        apply_log=legacy_log_transform,
+                        word_type_filter=word_type_filter
+                    )
+                    results_b = analyzer.analyze_text(
+                        text_b,
+                        selected_indices,
+                        apply_log=legacy_log_transform,
+                        word_type_filter=word_type_filter
+                    )
+
+                    if legacy_log_transform:
+                        st.warning("⚠️ Legacy mode: Log transformation applied to ALL measures")

                 # Display comparison results
                 display_comparison_results(results_a, results_b)
|
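The user-feedback totals in this handler depend on the shape of the selection dicts passed to `analyze_text`. A minimal sketch of that computation, assuming `selected_measures` and `log_transforms` map each reference-list index to a list of measure names (the indices and measure names below are hypothetical, not taken from the repository):

```python
# Hypothetical configuration shapes: each reference-list index maps to the
# measure names selected (or log-transformed) for that list.
selected_measures = {
    0: ["frequency", "range"],
    1: ["frequency"],
}
log_transforms = {
    0: ["frequency"],
    1: [],
}

# Totals computed the same way as in the handler diff above.
total_measures = sum(len(measures) for measures in selected_measures.values())
total_logs = sum(len(logs) for logs in log_transforms.values())

print(total_measures, total_logs)  # 3 1
```

Because `any(selected_measures.values())` is falsy when every list is empty, the handler falls back to Smart Defaults whenever the user has opened the enhanced UI but ticked nothing.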