Awni Hannun
AI & ML interests: None yet
Recent Activity
upvoted an article 20 days ago
published an article 20 days ago
updated a model 3 months ago: mlx-community/MiniMax-M2.5-4bit

Organizations
Missing tie_word_embeddings in config.json causes incorrect weight tying · #1 opened 3 months ago by louis-jan
Transformers v5 support (9) · #15 opened 4 months ago by AntonV
Update tokenizer_config.json 🚀 2 (1) · #14 opened 4 months ago by awni
Update tokenizer_config.json (1) · #3 opened 4 months ago by ArthurZ
How to get such good quality as this quant? (For translations) (24) · #3 opened 9 months ago by bibproj
Fail early if model requires `trust_remote_code` (5) · #63 opened 9 months ago by pcuenq
Model size ➕ 1 (2) · #1 opened 9 months ago by depasquale
Protocol for experiments 👍 3 · #1 opened 9 months ago by awni
Update README.md · #4 opened 10 months ago by awni
Tokenizer's `model_max_length` unusual value (2) · #48 opened about 1 year ago by awni
eos token id mismatch (1) · #12 opened 10 months ago by awni
tokenizer_config.json is not correct (12) · #1 opened 11 months ago by depasquale
Token generation speed is very slow (4) · #4 opened 12 months ago by huynguyendbs
Model not found..? (2) · #1 opened 12 months ago by balnazzar
bfloat16-conversion · #1 opened about 1 year ago by neilmehta24
Error when converting huihui-ai/Llama-3.2-3B-Instruct-abliterated: Received parameters not in model: lm_head.weight. (5) · #36 opened about 1 year ago by Felladrin
Error converting microsoft/Phi-4-mini-instruct: Shapes (48) and (64) cannot be broadcast. (4) · #35 opened about 1 year ago by Felladrin
VRAM Requirements for Running the Model (3) · #1 opened over 1 year ago by wilfoderek
3bit-bf16 (4) · #1 opened over 1 year ago by ehartford
Upload folder using huggingface_hub (2) · #1 opened over 1 year ago by awni