Optimize translation: reduce max_tokens and context_size, add --no-display-prompt flag (b45369f, zazaman committed on Nov 9)
Replace llama-cpp-python with pre-built llama.cpp binary for Qwen translator (c26a471, zazaman committed on Nov 9)
Add multilingual translation support with Qwen3-0.6B-GGUF and optimize for Hugging Face Spaces deployment (a2e1879, zazaman committed on Nov 9)