Intel/phi-2-int4-inc
Tags: Text Generation · Transformers · Safetensors · NeelNanda/pile-10k · phi · text-generation-inference · 4-bit precision · intel/auto-round
License: apache-2.0
Repository: phi-2-int4-inc · 1.84 GB · 4 contributors · 17 commits
Latest commit: b7b2aed (verified), "Update README.md" by wenhuach, over 1 year ago
| File | Size | Last commit message | Last modified |
|------|------|---------------------|---------------|
| .gitattributes | 1.52 kB | initial commit | almost 2 years ago |
| README.md | 3.47 kB | Update README.md | over 1 year ago |
| added_tokens.json | 1.08 kB | first commit | almost 2 years ago |
| config.json | 1.25 kB | replace with sym quantization as there is a large accuracy drop due to kernel issue | over 1 year ago |
| configuration_phi.py | 2.03 kB | first commit | almost 2 years ago |
| merges.txt | 456 kB | first commit | almost 2 years ago |
| model.safetensors | 1.84 GB | replace with sym quantization as there is a large accuracy drop due to kernel issue | over 1 year ago |
| modeling_phi.py | 33.4 kB | first commit | almost 2 years ago |
| quantize_config.json | 464 Bytes | replace with sym quantization as there is a large accuracy drop due to kernel issue | over 1 year ago |
| special_tokens_map.json | 441 Bytes | first commit | almost 2 years ago |
| tokenizer.json | 2.11 MB | replace with sym quantization as there is a large accuracy drop due to kernel issue | over 1 year ago |
| tokenizer_config.json | 7.37 kB | replace with sym quantization as there is a large accuracy drop due to kernel issue | over 1 year ago |
| vocab.json | 798 kB | first commit | almost 2 years ago |