Request for Model Implementation Code (LlavaLlamaForCausalLM)

#2
by WiningByNow - opened

Subject: Request for Model Implementation Code (LlavaLlamaForCausalLM)
Dear Model Author (ppxin321),
I hope this message finds you well.
I’m currently exploring the model HolmesVAD-7B hosted at https://huggingface.co/ppxin321/HolmesVAD-7B, which looks like a promising multimodal model based on its configuration ("architectures": ["LlavaLlamaForCausalLM"]).
However, I noticed that the model repository does not include the corresponding Python implementation files (e.g., modeling_llava.py or the LlavaLlamaForCausalLM class definition). As a result, when attempting to load the model using Hugging Face’s from_pretrained() method with an images input (as expected for a LLaVA-like setup), I encounter an error because the required model class (LlavaLlamaForCausalLM) is not available.
Could you kindly share the implementation code for the LlavaLlamaForCausalLM class (or let me know which class corresponds to the "LlavaLlamaForCausalLM" architecture listed in the config)? Ideally, access to the modeling script (e.g., modeling_llava.py) or guidance on how to properly load and use this model for multimodal (text + image) inference would be very helpful.
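For context, here is a minimal sketch of the check I run on my side before calling from_pretrained(). It assumes the missing class is the one defined in the official LLaVA codebase (github.com/haotian-liu/LLaVA) under llava.model; that import path is an assumption on my part, so please correct me if HolmesVAD-7B relies on a fork or a different package layout:

```python
# Sketch: probe whether the LLaVA codebase is importable before loading the
# model. Assumes the "LlavaLlamaForCausalLM" architecture named in config.json
# comes from github.com/haotian-liu/LLaVA (an assumption, not confirmed).
import importlib.util


def resolve_llava_class():
    """Return LlavaLlamaForCausalLM if the LLaVA package is importable, else None."""
    if importlib.util.find_spec("llava") is None:
        # The "llava" package is not installed, so the architecture listed in
        # the config cannot be resolved and from_pretrained() will fail.
        return None
    from llava.model import LlavaLlamaForCausalLM  # assumed import path
    return LlavaLlamaForCausalLM


if resolve_llava_class() is None:
    print("LlavaLlamaForCausalLM not importable; loading the checkpoint will fail")
```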
If the code is available elsewhere (e.g., on GitHub or in a private repo), I’d greatly appreciate a link or instructions to access it.
Thank you very much for your time and for sharing this model. I’m looking forward to your response.
Best regards,

