Abhinav Kulkarni (abhinavkulkarni)
AI & ML interests: NLP
Organizations: None yet
Multi token prediction not working (1) · #6 opened 3 months ago by abhinavkulkarni
Were the original PDFs saved? (11) · #2 opened 5 months ago by staghado
Enabling or disabling reasoning on-demand with remote vLLM server? (1) · #15 opened 9 months ago by abhinavkulkarni
Adding `safetensors` variant of this model · #1 opened over 1 year ago by SFconvertbot
llava_eval_image_embed : failed to eval (5) · #6 opened almost 2 years ago by jartine
Adding `safetensors` variant of this model · #1 opened almost 2 years ago by SFconvertbot
Unable to quantize layers one at a time (2) · #8 opened about 2 years ago by abhinavkulkarni
Unable to load the model.. Can you explain.. (2) · #7 opened about 2 years ago by jdc4429
Unable to load tokenizer (2) · #5 opened about 2 years ago by abhinavkulkarni
Adding `safetensors` variant of this model · #1 opened over 2 years ago by SFconvertbot
70B AWQ model? (1) · #2 opened over 2 years ago by Teja-Gollapudi
is it possible to make this work with huggingface/text-generation-inference ? (1) · #5 opened over 2 years ago by captainst
The code provided in the model card does not work (3) · #1 opened over 2 years ago by AnalogAiBert
AWQ 4-bit 128g version of open-llama-13b-open-instruct! · #3 opened over 2 years ago by abhinavkulkarni
AWQ 4-bit 128g version of open-llama-7b-open-instruct! · #1 opened over 2 years ago by abhinavkulkarni
AWQ 4-bit 128g version of falcon-7b-instruct! (❤️ 2) · #55 opened over 2 years ago by abhinavkulkarni
Inference (1) · #1 opened over 2 years ago by bartowski
Fix q_group_size (1) · #1 opened over 2 years ago by mike-ravkine
Thanks for this model! (4) · #1 opened over 2 years ago by mike-ravkine
Basic question about 4-bit quantization (3) · #18 opened over 2 years ago by abhinavkulkarni