tatsuyaaaaaaa/granite-4.0-micro-base-gguf

Format: GGUF
Dataset: TFMC/imatrix-dataset-for-japanese-llm
Languages: Japanese, English
License: apache-2.0
This is a GGUF conversion of ibm-granite's granite-4.0-micro-base.
Downloads last month: 8
Model size: 3B params
Architecture: granitehybrid
Quantized files:

4-bit:
  Q4_K_S  2 GB
  Q4_0    1.98 GB
  Q4_K_M  2.1 GB
5-bit:
  Q5_K_S  2.38 GB
  Q5_0    2.38 GB
  Q5_1    2.57 GB
  Q5_K_M  2.44 GB
6-bit:
  Q6_K    2.8 GB
8-bit:
  Q8_0    3.62 GB
16-bit:
  BF16    6.81 GB
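As a rough guide to choosing among the files above, here is a minimal sketch that picks the largest (highest-fidelity) quantization fitting a given RAM budget. The file sizes come from the table; the fixed headroom allowance for the KV cache and runtime overhead is an assumption of this sketch, not guidance from the model author.

```python
from typing import Optional

# Quantization variants and file sizes in GB, from the table above.
QUANTS = {
    "Q4_0": 1.98, "Q4_K_S": 2.00, "Q4_K_M": 2.10,
    "Q5_K_S": 2.38, "Q5_0": 2.38, "Q5_1": 2.57, "Q5_K_M": 2.44,
    "Q6_K": 2.80, "Q8_0": 3.62, "BF16": 6.81,
}

def pick_quant(ram_gb: float, headroom_gb: float = 1.5) -> Optional[str]:
    """Return the largest quant file that fits in ram_gb, leaving
    headroom_gb free for the KV cache and runtime overhead (assumed value)."""
    budget = ram_gb - headroom_gb
    fitting = {name: size for name, size in QUANTS.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else None
```

For example, with 4 GB of RAM this picks Q5_K_M (2.44 GB fits the 2.5 GB budget), while 16 GB allows the unquantized BF16 file.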
Model tree for tatsuyaaaaaaa/granite-4.0-micro-base-gguf:
Base model: ibm-granite/granite-4.0-micro-base
Quantized: 7 models, including this one
Dataset used to train tatsuyaaaaaaa/granite-4.0-micro-base-gguf:
TFMC/imatrix-dataset-for-japanese-llm (updated Apr 19, 2024)
Collection including tatsuyaaaaaaa/granite-4.0-micro-base-gguf:
IBM, a collection related to IBM (5 items, updated Nov 3, 2025)