guardrails-final / llm_clients / qwen_translator.py

Commit History

Simplify translation prompt to reduce reasoning time
31aee82

zazaman committed on

Move re import to top of file
bb5b92d

zazaman committed on

Filter out reasoning tokens and extract actual translation from Qwen output
08546b6

zazaman committed on
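Qwen3 models wrap their chain-of-thought in `<think>…</think>` tags before emitting the answer, so a commit like this one plausibly strips that block with a regex. The sketch below is illustrative, not the repo's actual code; the function name `extract_translation` is hypothetical.

```python
import re

def extract_translation(raw_output: str) -> str:
    """Strip a Qwen3 <think>...</think> reasoning block (hypothetical
    helper illustrating the approach) and return the remaining text."""
    # Remove a fully closed reasoning block.
    cleaned = re.sub(r"<think>.*?</think>", "", raw_output, flags=re.DOTALL)
    # If only a stray closing tag survived (e.g. the opening tag was part
    # of the prompt echo), drop everything up to and including it.
    cleaned = re.sub(r"^.*?</think>", "", cleaned, flags=re.DOTALL)
    return cleaned.strip()
```

With this in place, `extract_translation("<think>Let me translate...</think>\nBonjour")` yields just `"Bonjour"`.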

Fix timeout error message to match actual timeout
cee104f

zazaman committed on
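A mismatch between the timeout passed to the subprocess and the number quoted in the error message is easy to fix by deriving both from one constant. This is a minimal sketch of that pattern, assuming the translator shells out with `subprocess.run`; the constant's value and the function name are illustrative.

```python
import subprocess

TRANSLATION_TIMEOUT = 120  # seconds; illustrative value, not from the repo

def run_with_timeout(cmd: list) -> str:
    """Run a command, raising an error whose message reuses the same
    timeout constant so the two can never drift apart again."""
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=TRANSLATION_TIMEOUT
        )
    except subprocess.TimeoutExpired:
        raise RuntimeError(
            f"Translation timed out after {TRANSLATION_TIMEOUT} seconds"
        )
    return result.stdout
```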

Optimize translation: reduce max_tokens and context_size, add no-display-prompt flag
b45369f

zazaman committed on

Remove --stop argument (not supported in llama.cpp CLI)
bb181f0

zazaman committed on

Compile llama.cpp in Dockerfile for architecture compatibility
b880cfc

zazaman committed on

Add automatic architecture detection and binary selection for llama.cpp
d2cecb3

zazaman committed on
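Automatic architecture detection for a native binary usually means mapping `platform.machine()` to the right build. A sketch of that idea, assuming the repo bundles per-architecture llama.cpp binaries; the binary names and mapping here are hypothetical, not taken from the commit.

```python
import platform

# Hypothetical mapping from machine architecture to a bundled
# llama.cpp binary name; entries are illustrative.
BINARIES = {
    "x86_64": "llama-cli-x86_64",
    "aarch64": "llama-cli-aarch64",
    "arm64": "llama-cli-aarch64",  # macOS reports Apple silicon as arm64
}

def select_llama_binary(arch: str = "") -> str:
    """Pick the llama.cpp binary matching the current (or given)
    architecture, failing loudly when none is available."""
    arch = arch or platform.machine()
    try:
        return BINARIES[arch]
    except KeyError:
        raise RuntimeError(f"No llama.cpp binary for architecture: {arch}")
```

Failing with an explicit error for unknown architectures lines up with the following commit's "binary architecture compatibility check and better error handling".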

Add binary architecture compatibility check and better error handling
e7a0c9a

zazaman committed on

Add comprehensive logging with flush for translation debugging
1ff012c

zazaman committed on

Fix translation: add OS detection, better error handling and logging
1af1f14

zazaman committed on

Replace llama-cpp-python with pre-built llama.cpp binary for Qwen translator
c26a471

zazaman committed on

Add multilingual translation support with Qwen3-0.6B-GGUF and optimize for Hugging Face Spaces deployment
a2e1879

zazaman committed on