Hugging Face
Austin Vance (austinbv)
2 followers · 3 following
https://focused.io
AI & ML interests
None yet
Recent Activity
- 19 days ago: new activity on mlx-community/deepseek-ai-DeepSeek-V4-Flash-4bit: "Is it possible to convert to a 2-bit quantized version?"
- 20 days ago: published a model, mlx-community/deepseek-ai-DeepSeek-V4-Flash-2bit
- 20 days ago: updated a model, mlx-community/deepseek-ai-DeepSeek-V4-Flash-3bit
austinbv's activity
New activity in mlx-community/deepseek-ai-DeepSeek-V4-Flash-4bit (19 days ago):
"Is it possible to convert to a 2-bit quantized version?" (1); #1 opened 20 days ago by hehua2008
New activity in mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX (about 1 year ago):
"Conversion request to Q5_K_M for MLX" (6); #4 opened about 1 year ago by websprockets
New activity in mlx-community/meta-llama-Llama-4-Scout-17B-16E-4bit (about 1 year ago):
"ValueError: Model type llama4 not supported." (4); #1 opened about 1 year ago by jtdavies