Filter out reasoning tokens and extract actual translation from Qwen output 08546b6 zazaman committed on Nov 9
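The commit above describes stripping Qwen's reasoning tokens from the raw model output before returning the translation. A minimal sketch, assuming the reasoning is wrapped in `<think>...</think>` tags as Qwen3's chat template emits them; the function name is hypothetical, not taken from this repo:

```python
import re

def strip_reasoning(raw: str) -> str:
    """Remove Qwen3 <think>...</think> blocks, leaving only the translation."""
    # Drop complete reasoning blocks (DOTALL so the block may span lines).
    cleaned = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL)
    # If generation was cut off mid-reasoning, drop the dangling open tag too.
    cleaned = re.sub(r"<think>.*", "", cleaned, flags=re.DOTALL)
    return cleaned.strip()
```

Handling the unclosed-tag case matters in practice: with a small `max_tokens` budget the model can run out of tokens while still inside its reasoning block.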
Optimize translation: reduce max_tokens and context_size, add no-display-prompt flag b45369f zazaman committed on Nov 9
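These tuning knobs map onto real `llama-cli` flags: `-n`/`--n-predict` caps generated tokens, `-c`/`--ctx-size` sets the context window, and `--no-display-prompt` keeps the echoed prompt out of captured stdout. A sketch of how the invocation might be assembled; the helper name, paths, and specific numeric values are assumptions, not taken from this repo:

```python
from typing import List

def build_command(binary: str, model_path: str, prompt: str,
                  n_predict: int = 128, ctx_size: int = 1024) -> List[str]:
    """Build a llama-cli command line tuned for short, low-latency translations."""
    return [
        binary,
        "-m", model_path,
        "-p", prompt,
        "-n", str(n_predict),      # cap on generated tokens
        "-c", str(ctx_size),       # reduced context window
        "--no-display-prompt",     # don't echo the prompt into stdout
    ]
```

Running this with `subprocess.run(..., capture_output=True, text=True)` then yields output that contains only the model's generation, which simplifies the token-filtering step above.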
Add automatic architecture detection and binary selection for llama.cpp d2cecb3 zazaman committed on Nov 9
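Automatic binary selection typically keys off the OS and CPU architecture reported by Python's `platform` module. A minimal sketch under that assumption; the binary names and the mapping itself are hypothetical, chosen only to illustrate the lookup-with-clear-error pattern the later "compatibility check and better error handling" commits suggest:

```python
import platform
from typing import Optional

# Hypothetical mapping from (OS, machine) to a pre-built llama.cpp binary.
BINARIES = {
    ("Linux", "x86_64"): "llama-cli-linux-x64",
    ("Linux", "aarch64"): "llama-cli-linux-arm64",
    ("Darwin", "arm64"): "llama-cli-macos-arm64",
}

def select_binary(system: Optional[str] = None, machine: Optional[str] = None) -> str:
    """Pick a pre-built llama.cpp binary for the current (or given) platform."""
    key = (system or platform.system(), machine or platform.machine())
    try:
        return BINARIES[key]
    except KeyError:
        # Fail loudly with the detected platform instead of exec'ing an
        # incompatible binary and getting an opaque "Exec format error".
        raise RuntimeError(f"No pre-built llama.cpp binary for {key}")
```

On Hugging Face Spaces the host is effectively always Linux/x86_64, but detecting rather than hardcoding keeps local development on Apple Silicon working with the same code path.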
Add binary architecture compatibility check and better error handling e7a0c9a zazaman committed on Nov 9
Fix translation: add OS detection, better error handling and logging 1af1f14 zazaman committed on Nov 9
Replace llama-cpp-python with pre-built llama.cpp binary for Qwen translator c26a471 zazaman committed on Nov 9
Add multilingual translation support with Qwen3-0.6B-GGUF and optimize for Hugging Face Spaces deployment a2e1879 zazaman committed on Nov 9