Filter out reasoning tokens and extract actual translation from Qwen output 08546b6 zazaman committed on Nov 9, 2025
Optimize translation: reduce max_tokens and context_size, add no-display-prompt flag b45369f zazaman committed on Nov 9, 2025
Compile llama.cpp in Dockerfile for architecture compatibility b880cfc zazaman committed on Nov 9, 2025
Add automatic architecture detection and binary selection for llama.cpp d2cecb3 zazaman committed on Nov 9, 2025
Add binary architecture compatibility check and better error handling e7a0c9a zazaman committed on Nov 9, 2025
Add comprehensive logging with flush for translation debugging 1ff012c zazaman committed on Nov 9, 2025
Fix translation: add OS detection, better error handling, and logging 1af1f14 zazaman committed on Nov 9, 2025
Replace llama-cpp-python with pre-built llama.cpp binary for Qwen translator c26a471 zazaman committed on Nov 9, 2025
Add multilingual translation support with Qwen3-0.6B-GGUF and optimize for Hugging Face Spaces deployment a2e1879 zazaman committed on Nov 9, 2025