Hugging Face model: tzervas/phi-4-bitnet-1.58b
Tags: Text Generation, Safetensors, GGUF, English, phi3, bitnet, quantization, ternary, 1.58-bit, phi-4, experimental, 14b-architecture, conversational, 8-bit precision
License: MIT
Community discussions (1)
Does it work to quantize an existing LLM to 1.58-bit?
#1 opened 3 days ago by SilverJim
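The tags above (bitnet, ternary, 1.58-bit) refer to BitNet b1.58-style weight quantization, where each weight is mapped to one of three values {-1, 0, +1}, giving log2(3) ≈ 1.58 bits per weight. A minimal sketch of the absmean ternary quantizer described in the BitNet b1.58 paper, in plain NumPy (function names are illustrative and not taken from this repository):

```python
import numpy as np

def absmean_ternary_quantize(w: np.ndarray):
    """Quantize a weight tensor to {-1, 0, +1} with an absmean scale.

    Scale each weight by the mean absolute value of the tensor,
    then round and clamp to the ternary set.
    """
    gamma = np.mean(np.abs(w)) + 1e-8  # absmean scale; epsilon avoids div-by-zero
    q = np.clip(np.round(w / gamma), -1, 1)
    return q.astype(np.int8), float(gamma)

def dequantize(q: np.ndarray, gamma: float) -> np.ndarray:
    """Recover an approximate float tensor from the ternary codes."""
    return q.astype(np.float32) * gamma

# Tiny example: weights near zero collapse to 0, large ones saturate to ±1.
w = np.array([0.9, -0.05, -1.2, 0.4])
q, gamma = absmean_ternary_quantize(w)
print(q)  # ternary codes, one int8 per weight
```

Whether applying this post-hoc to an already-trained FP16 checkpoint works well (the question asked above) is a separate matter: BitNet models are normally trained with ternary weights from the start, so a pure post-training conversion typically needs fine-tuning to recover quality.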