What are the hardware resources and requirements to host ALLaM-Instruct-preview-7B model?
I'm looking to deploy and host the ALLaM-Instruct-preview-7B model and would appreciate guidance on the hardware requirements needed to run it effectively.
Could anyone share the recommended hardware resources, such as:
GPU: What is the minimum GPU VRAM required to run inference?
System RAM: How much RAM is needed to run the model efficiently?
Disk Space: How much storage space is necessary for model files and dependencies?
Other Requirements: Any additional hardware specs or optimizations?
Additionally, if anyone has experience running this model, I would love to hear about the setup and challenges faced during deployment.
Thanks in advance for your help!
GPU
For bare-minimum usage, i.e. a 4-bit or 8-bit quantized version, I would say:
4–6 GB VRAM (e.g., RTX 2060/3050 or similar)
For full precision (BF16/FP16):
14–16 GB VRAM (e.g., RTX 3090, 4090, A4000)
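If it helps, here is a minimal loading sketch for the 4-bit setup using transformers + bitsandbytes. The repo id and generation settings below are my assumptions, so adjust them to the exact model path you are deploying:

```python
# Minimal sketch (my setup, not official): load the 7B instruct preview in 4-bit.
# The repo id below is an assumption -- replace it with the exact Hugging Face path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ALLaM-AI/ALLaM-7B-Instruct-preview"  # assumed repo id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # float16 compute also works on older cards (e.g. RTX 2060)
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # places layers on the available GPU(s), spills to CPU if needed
)

prompt = "What is the capital of Saudi Arabia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the full-precision numbers, drop quantization_config and pass torch_dtype=torch.bfloat16 instead; that is where the 14–16 GB VRAM figure comes from.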
RAM
Minimum: 16 GB (it also worked with 11 GB in my experiments)
Recommended: 32 GB
Disk Space
For storage, you mainly need space for the weights + environment:
Model Weights:
The weights and evaluation files are located at: HuggingFace link
Typically require:
~14–16 GB for FP16 weights
~4–7 GB for quantized (4-bit/8-bit) weights
Environment & Dependencies:
PyTorch, Transformers, CUDA libs, runtime tools = ~5–8 GB
Total Recommended Disk Space:
20–25 GB (for FP16)
10–15 GB (for quantized)
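If you want to verify the disk numbers on your own machine, a small sketch like this pre-downloads the weights with huggingface_hub and reports their size (again, the repo id is assumed):

```python
# Sketch for checking the on-disk size of the downloaded weights (repo id assumed).
import os
from huggingface_hub import snapshot_download

local_dir = snapshot_download("ALLaM-AI/ALLaM-7B-Instruct-preview")  # assumed repo id

total_bytes = sum(
    os.path.getsize(os.path.join(root, f))
    for root, _, files in os.walk(local_dir)
    for f in files
)
print(f"Weights on disk: {total_bytes / 1e9:.1f} GB at {local_dir}")
```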
Other Notes
Use a quantized version first and see if it fits your use case; you can also try Ollama.
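Continuing from the loading sketch above (this reuses model and inputs from there), you can measure the real peak VRAM on your own GPU instead of relying on my rough numbers:

```python
# Continuation of the loading sketch above: reuses `model` and `inputs` from there.
import torch

torch.cuda.reset_peak_memory_stats()
_ = model.generate(**inputs, max_new_tokens=256)
peak_gb = torch.cuda.max_memory_allocated() / 1e9
print(f"Peak VRAM during generation: {peak_gb:.1f} GB")
```

model.get_memory_footprint() also reports how much memory the weights themselves take, not counting activations or the KV cache.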