Autologic SLM 0.5B - Qwen2.5 LoRA fine-tuned, ONNX fp32 with KV cache

Latest commit: b0665de (verified), "Add model card"

Repository files, grouped by last commit message (filenames not captured):

- "Autologic SLM 0.5B - Qwen2.5 LoRA fine-tuned, ONNX fp32 with KV cache": 1.62 kB, 605 Bytes, 2.43 kB, 117 Bytes, 1.67 MB, 1.13 MB, 2.52 GB, 616 Bytes, 4.69 kB, 2.78 MB
- "Add model card": 1.53 kB
- "Fix config: use_cache=true, remove layer_types": 696 Bytes
- "Fix tokenizer merges format for Transformers.js": 8.36 MB
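The "Fix config: use_cache=true, remove layer_types" commit most plausibly targets the repository's `config.json` (the 696-Byte file fits a small config edit). A minimal sketch of what such a config could look like after the fix, assuming a standard Qwen2-style `config.json` as used by Transformers and Transformers.js - the field values below are illustrative of Qwen2.5-0.5B-class models, not read from this repository:

```json
{
  "model_type": "qwen2",
  "architectures": ["Qwen2ForCausalLM"],
  "use_cache": true,
  "vocab_size": 151936,
  "hidden_size": 896,
  "num_hidden_layers": 24,
  "num_attention_heads": 14,
  "num_key_value_heads": 2
}
```

With `use_cache` set to `true`, a runtime consuming the ONNX export can feed past key/value tensors back in at each decoding step instead of re-encoding the full prefix; dropping a nonstandard `layer_types` key avoids tripping up loaders that validate the config against the expected Qwen2 schema.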