AI & ML interests: None yet
Models:

- jnjj/gemma-3-1b-it-qat-int4-quantized-weights-only-sf • Updated
- jnjj/gemma-3-1b-it-qat-int4-quantized-functional • 1B • Updated
- jnjj/gemma-3-1b-it-qat-int4-quantized-inference-extreme-shrinkage-sf • Updated
- jnjj/unsloth_xd-Q2_K-GGUF • 0.7B • Updated
- jnjj/gemma-3-1b-it-qat-int4-quantized-inference-extreme-shrinkage-sf-Q2_K-GGUF • 0.7B • Updated
- jnjj/gemma-3-1b-it-qat-int4-quantized-inference-unrestricted-pruned-weights-only-sf • Updated
- jnjj/gemma-3-1b-it-qat-int4-quantized-inference-unrestricted-pruned-addquant-sf • Updated
- jnjj/gemma-3-1b-it-qat-int4-quantized-inference-unrestricted-pruned-sf • Text Generation • Updated
- jnjj/gemma-3-4b-it-qat-int4-quantized-inference-unrestricted-pruned-sf • Image-Text-to-Text • Updated
- jnjj/gemma-3-4b-it-qat-int4-quantized-inference-unrestricted-weights-only-sf • Image-Text-to-Text • Updated
- jnjj/gemma-3-4b-it-qat-int4-quantized-inference-unrestricted-weights-only-sf-Q2_K-GGUF • 2B • Updated • 124
- jnjj/gemma-3-1b-it-qat-int4-quantized-inference-unrestricted-weights-only-sf • Text Generation • Updated
- jnjj/gemma-3-1b-it-qat-int4-quantized-inference-unrestricted • Text Generation • 1B • Updated • 2
- jnjj/gemma-3-1b-it-qat-int4-quantized-inference • Text Generation • 1B • Updated • 1
- jnjj/txtinstruct_full_cpu • Text Generation • 13M • Updated • 1
- jnjj/txtinstruct_full_gpu • Updated
- jnjj/pure-torch-transformer-cpu-20250420_212430 • Updated
- jnjj/one-layer-gpt2-colab-v5-20250420_213331 • Text Generation • 1.49M • Updated
- jnjj/one-layer-gpt2-colab-v5-20250420_211849 • Updated
- jnjj/pure-torch-transformer-cpu • Updated
- jnjj/one-layer-gpt2-colab-v5 • Text Generation • 1.49M • Updated
- jnjj/one-layer-gpt2-colab-v5-20250420_205253 • Updated
- jnjj/pure-torch-transformer-cpu-20250420_204716 • Updated
- jnjj/one-layer-gpt2-colab-v5-20250420_160926 • Updated
- jnjj/one-layer-gpt2-colab-v5-20250420_155111 • Updated
- jnjj/one-layer-gpt2-colab-v2-20250420_072746 • Updated
- jnjj/your_model_repo_1layer_pro_v2 • Text Generation • 13M • Updated