Fix model discovery: skip wildcard '*' model IDs from LiteLLM proxy 9c3ced0 Somuai12 committed on Apr 7
Fix MODEL_NAME=None: auto-discover from proxy /models endpoint, fallback to gpt-4o-mini 79fb14b Somuai12 committed on Apr 7
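These two commits together suggest a discovery routine roughly like the sketch below. The /models path, the '*' wildcard entry, and the gpt-4o-mini fallback come from the messages; the helper name and the env-var wiring are assumptions.

```python
import os

import requests

FALLBACK_MODEL = "gpt-4o-mini"  # fallback named in the commit message

def discover_model(api_base: str, api_key: str) -> str:
    """Pick a usable model ID from the proxy's /models listing.

    LiteLLM proxies can advertise a literal "*" (route-anything) entry,
    which is not a real model ID, so it is skipped here.
    """
    try:
        resp = requests.get(
            f"{api_base.rstrip('/')}/models",
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        resp.raise_for_status()
        # OpenAI-compatible shape: {"data": [{"id": "..."}, ...]}
        ids = [m.get("id") for m in resp.json().get("data", [])]
        ids = [i for i in ids if i and i != "*"]
        if ids:
            return ids[0]
    except requests.RequestException:
        pass  # fall through to the documented default
    return FALLBACK_MODEL

MODEL_NAME = os.getenv("MODEL_NAME") or discover_model(
    os.environ["API_BASE_URL"], os.environ["API_KEY"]
)
```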
Critical Fix: Align internal port to 8000 to satisfy OpenEnv library requirements 47a298a Somuai12 committed on Apr 7
Compliance Fix: Resolve setup timeout with lazy Gradio and extended 120s wait 6a19dc6 Somuai12 committed on Apr 7
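The port and timeout fixes point at a startup sequence along these lines; port 8000 and the 120-second budget are from the messages, while the polling loop, the health path, and the lazy-import placement are assumptions.

```python
import time

import requests

INTERNAL_PORT = 8000   # aligned with what the OpenEnv tooling expects
READY_TIMEOUT_S = 120  # the extended wait from the compliance fix

def wait_until_ready(url: str, timeout_s: float = READY_TIMEOUT_S) -> None:
    """Poll until the server answers, so setup validation doesn't race it."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            if requests.get(url, timeout=2).ok:
                return
        except requests.RequestException:
            pass
        time.sleep(1)
    raise TimeoutError(f"server at {url} not ready after {timeout_s}s")

def launch_ui() -> None:
    """Import Gradio lazily so its import cost doesn't eat the setup budget."""
    import gradio as gr  # deferred: the "lazy Gradio" part of the fix

    gr.Interface(fn=lambda x: x, inputs="text", outputs="text").launch(
        server_name="0.0.0.0", server_port=INTERNAL_PORT
    )
```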
Fix proxy test: exit with 1 on API failure so the validator sees the error; fall back to HF_TOKEN if API_KEY is empty 899c12a Somuai12 committed on Apr 7
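A plausible shape for the fixed proxy test: the non-zero exit and the HF_TOKEN fallback are from the message, the rest (OpenAI SDK client, model default) is assumed.

```python
import os
import sys

from openai import OpenAI

# Fallback from the commit: use HF_TOKEN when API_KEY is empty.
api_key = os.getenv("API_KEY") or os.getenv("HF_TOKEN")
client = OpenAI(base_url=os.environ["API_BASE_URL"], api_key=api_key)

try:
    client.chat.completions.create(
        model=os.getenv("MODEL_NAME", "gpt-4o-mini"),
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=1,
    )
except Exception as exc:
    print(f"proxy test failed: {exc}", file=sys.stderr)
    sys.exit(1)  # non-zero exit so the validator surfaces the failure

print("proxy test passed")
```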
Compliance Hardening: Remove silent fallbacks to force proxy usage 292424c Somuai12 committed on Apr 7
Compliance fix: strictly use API_KEY and API_BASE_URL to avoid proxy bypass 09a9c72 Somuai12 committed on Apr 7
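These two hardening commits likely boil down to required env vars that fail loudly instead of quietly falling back to a direct provider; the helper name is an assumption.

```python
import os

from openai import OpenAI

def require_env(name: str) -> str:
    """Fail loudly instead of silently falling back to a direct provider."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is required; refusing to bypass the proxy")
    return value

# All traffic must go through the provided proxy: no hardcoded base URLs,
# no personal API keys as a quiet fallback.
client = OpenAI(
    base_url=require_env("API_BASE_URL"),
    api_key=require_env("API_KEY"),
)
```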
Fix structured output: ensure logging always runs and the format matches the validator 9cdb062 Somuai12 committed on Apr 7
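The validator's exact output schema isn't visible in this log, so the field names below are placeholders; the substance of the fix is the try/finally that guarantees a result line is always emitted.

```python
import json
import sys

def run_episode(agent, env) -> dict:
    """Run one episode; the record is logged even if the agent crashes."""
    record = {"status": "error", "reward": None}  # hypothetical field names
    try:
        reward = agent.act(env)  # placeholder for the real rollout
        record.update(status="ok", reward=reward)
    finally:
        # try/finally guarantees the validator always sees one result line.
        print(json.dumps(record), file=sys.stdout, flush=True)
    return record
```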
Fix inference.py: async OpenEnv pattern, from_docker_image, proper error handling 4c68ece Somuai12 committed on Apr 7
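OpenEnv's exact client API isn't shown in this log, so the sketch below is shape rather than signature: an async entry point, a from_docker_image constructor as named in the message, and a reset/step loop with errors surfaced rather than swallowed. The module, class, and policy hook are all hypothetical.

```python
import asyncio

def choose_action(obs):
    """Hypothetical policy hook; the real agent logic goes here."""
    return {"text": "noop"}

async def main() -> None:
    # Assumed API shape: a client built from a Docker image plus a
    # reset/step loop. Names follow the commit message, not a checked
    # OpenEnv signature.
    from envs.my_env import MyEnvClient  # hypothetical environment module

    env = MyEnvClient.from_docker_image("user/my-env:latest")  # assumed ctor
    try:
        obs = await asyncio.to_thread(env.reset)
        while not getattr(obs, "done", False):
            obs = await asyncio.to_thread(env.step, choose_action(obs))
    except Exception as exc:
        # "Proper error handling": report and re-raise instead of swallowing.
        print(f"episode failed: {exc}")
        raise
    finally:
        env.close()  # assumed cleanup

asyncio.run(main())
```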
final: comprehensive 0.9+ strategic agent upgrades and infrastructure refactor 933baa6 Somuai12 committed on Apr 5
hackathon: final submission candidate (removes binary image for HF compatibility) 6aa8acb Somuai12 committed on Apr 3
fix: Synchronize inference port to 7860 for standardized local testing 0dfef67 Somuai12 committed on Mar 31
Revert default LLM back to Llama-3.3-70B-Instruct for optimal general scoring 4813de2 Somuai12 committed on Mar 29