Joseph717171 committed
Commit 9b255c7 · verified · Parent: dab4642

Upload gemma-3-12B-it-qat-Linux_CUDA_NGL_KV_F32-unquantized-F32.imatrix with huggingface_hub

.gitattributes CHANGED
@@ -156,3 +156,4 @@ Qwen3-8B-Linux_CUDA_NGL_KV_F32-F32.imatrix filter=lfs diff=lfs merge=lfs -text
 Qwen3-4B-Linux_CUDA_NGL_KV_F32-F32.imatrix filter=lfs diff=lfs merge=lfs -text
 Qwen3-1.7B-Linux_CUDA_NGL_KV_F32-F32.imatrix filter=lfs diff=lfs merge=lfs -text
 Qwen3-0.6B-Linux_CUDA_NGL_KV_F32-F32.imatrix filter=lfs diff=lfs merge=lfs -text
+gemma-3-12B-it-qat-Linux_CUDA_NGL_KV_F32-unquantized-F32.imatrix filter=lfs diff=lfs merge=lfs -text
gemma-3-12B-it-qat-Linux_CUDA_NGL_KV_F32-unquantized-F32.imatrix ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ceee55e16f2a4436d253f006a14456e747f8a621a7f60a8e3c5a69d8e18cd18
+size 7433125
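For context, the three added lines above are not the imatrix data itself but a Git LFS pointer: Git tracks this small text stub while the real payload (7,433,125 bytes here) lives in LFS storage, keyed by the SHA-256 oid. A minimal sketch of parsing such a pointer into its fields (the `parse_lfs_pointer` helper is illustrative, not part of any LFS tooling):

```python
# Parse a Git LFS pointer file into its key/value fields.
# Per the git-lfs spec referenced on the "version" line, a pointer is
# plain text where each line is "<key> <value>".

POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:7ceee55e16f2a4436d253f006a14456e747f8a621a7f60a8e3c5a69d8e18cd18
size 7433125
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict."""
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

fields = parse_lfs_pointer(POINTER)
print(fields["size"])  # 7433125
```

Cloning the repo with LFS installed replaces the stub with the real file; a client can verify integrity by hashing the downloaded bytes and comparing against the `oid` value.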