Eval Request: amkkk/qwen3.5-0.8b-abliterated-alllayers

#582
by amkkk - opened

Hi! I’d like to request evaluation for this model on the UGI leaderboard.

Important loading note:

  • Requires trust_remote_code=True (custom modeling file in repo)

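A minimal loading sketch, assuming the standard `transformers` API (the model ID is taken from this request; the import is deferred so the snippet stands alone):

```python
def load_model(model_id: str = "amkkk/qwen3.5-0.8b-abliterated-alllayers"):
    """Load tokenizer and model for this repo.

    trust_remote_code=True is required because the repo ships a
    custom modeling file (architecture AbliteratedQwen3_5ForCausalLM).
    """
    # Deferred import: only needed when actually loading the model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    return tokenizer, model
```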
Recommended decoding settings:

  • max_new_tokens=384
  • min_new_tokens=0
  • repetition_penalty=1.10
  • no_repeat_ngram_size=4
  • ablation_strength=0.2
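The settings above, collected as `generate()` kwargs (a sketch; note that `ablation_strength` is not a standard `transformers` generation parameter, so I assume it is consumed by the repo's custom modeling code):

```python
# Recommended decoding settings from this request, as generate() kwargs.
gen_kwargs = {
    "max_new_tokens": 384,
    "min_new_tokens": 0,
    "repetition_penalty": 1.10,
    "no_repeat_ngram_size": 4,
    # Model-specific: assumed to be handled by the repo's custom
    # modeling file, not by stock transformers.
    "ablation_strength": 0.2,
}
```

These would be passed as `model.generate(**inputs, **gen_kwargs)` after loading with `trust_remote_code=True`.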

Notes:

  • The repo includes generation_config.json
  • The repo includes inference_serving.py / run_local.bat, which show the intended local serving behavior
  • If default harness decoding is used, early EOS/short outputs may occur more frequently

Thanks for your work on the leaderboard!

I'm getting the error:
"Value error, Model architectures ['AbliteratedQwen3_5ForCausalLM'] are not supported for now."
