Datasets:
NO pretrained mm_projector.bin
I can't find the mm_projector.bin file. When I run inference, I get: "RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [1, 1, 24, 24, 24, 512]".
Have you been able to run inference with the model? It appears the authors released weights for the vision encoder and the LLM only, without providing the mm_projector weights.
Does anyone know how to use the mm_projector weights, or are they just not provided?
They did not provide the mm_projector weights separately, but you can obtain the complete MLLM weights by merging the Hugging Face weights with those of the base model, and then save the mm_projector weights separately.
Could you please tell me how to do that?
Thanks, I saved the mm_projector weights separately. However, I must not have merged the complete MLLM weights correctly, because a warning about uninitialized weights pops up. Could you please indicate how I should do so?
Thank you
Please tell me how to merge the Hugging Face weights with the base model weights. Thanks! It is important for me!
I performed inference based on the CT-CHAT/llava/serve/ctchat_validation_llama.py script.
The --model-path parameter was set to CT-RATE/models/CT-CHAT/llava-lora-llama_3.1_70b (which is a renamed version of CT-RATE/models/CT-CHAT/llama_3.1_70b from Hugging Face), and the --model-base was set to meta-llama/Llama-3.1-70B.
I hope this information is helpful to you!
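In case it is useful to others: once that script runs, separating the projector should be possible with something along these lines (an untested sketch; it assumes CT-CHAT keeps LLaVA's load_pretrained_model helper, which merges the LoRA deltas and the non-LoRA trainables into the base model when the model name contains "llava" and "lora" and a model base is given; the paths are just the ones from my setup above):

import torch
from llava.model.builder import load_pretrained_model

# Loading with both a model path and a model base triggers LLaVA's LoRA-merge
# code path, so `model` ends up with the complete MLLM weights.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path="CT-RATE/models/CT-CHAT/llava-lora-llama_3.1_70b",
    model_base="meta-llama/Llama-3.1-70B",
    model_name="llava-lora-llama",
)

# Keep only the projector tensors and save them separately.
projector_state = {k: v.cpu() for k, v in model.state_dict().items() if "mm_projector" in k}
torch.save(projector_state, "mm_projector.bin")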
Thank you for your reply. But does your model support multimodal reasoning? I think you should use the builder under CT-CHAT/llava/model/multimodal_projector/builder.py to generate mm_projector.bin, and then select a --model-base that supports multimodal reasoning. Thanks again.
I'm not sure whether it's still necessary to use the CT-CHAT/llava/model/multimodal_projector/builder.py script. In principle, would the approach described in https://github.com/haotian-liu/LLaVA#launch-a-model-worker-lora-weights-unmerged for loading unmerged LoRA weights already be sufficient?
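If I remember that README correctly, the unmerged-LoRA launch looks roughly like this (LLaVA's example paths shown; substitute the CT-CHAT LoRA folder and the Llama base):

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 \
    --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b-lora --model-base lmsys/vicuna-13b-v1.5

As far as I know, the loader only takes the LoRA code path when the model path/name contains both "llava" and "lora", which is presumably why the folders are being renamed in this thread.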
Thank you for your answer. This afternoon I went to ModelScope and downloaded llama-8b as the model base; I renamed it llava-lora-llama-8b and added it to the inference code, but I get an error:
2025-05-29 17:15:29 | ERROR | stderr | FileNotFoundError: [Errno 2] No such file or directory: '/yinghepool/yinghe/Public_data/CT-RATE/models/CT-CHAT/llava-lora-llama_3.1_8b/mm_projector.bin'

My inference command is:

CUDA_VISIBLE_DEVICES=7 python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path "/yinghepool/yinghe/Public_data/CT-RATE/models/CT-CHAT/llava-lora-llama_3.1_8b/" --model-base "/yinghepool/jiawei/basemodel/LLM-Research/Meta-Llama-3-8B" --model-name llava --multi-modal

I tried to write a script to generate mm_projector.bin myself, but many of the resulting values are NaN and Inf. I'm pasting my conversion script below. I don't know how to get CT-CHAT inference running in Gradio, or how to generate mm_projector.bin properly; please also answer that. Thanks again.
import os
import torch
from transformers import AutoTokenizer, AutoConfig
from safetensors.torch import load_file
from llava.model.language_model.llava_llama import LlavaLlamaForCausalLM
from llava.model.multimodal_projector.builder import build_vision_projector
base_model_path = "/yinghepool/jiawei/basemodel/llava-hf/llava-lora-llama1___5-7b-hf/"
lora_model_path = "/yinghepool/yinghe/Public_data/CT-RATE/models/CT-CHAT/llava-lora-llama_3.1_8b/"
output_mm_path = os.path.join(lora_model_path, "mm_projector.bin")
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(base_model_path, use_fast=False)
model = LlavaLlamaForCausalLM.from_pretrained(base_model_path, torch_dtype=torch.float16)
# Fix config based on actual config + projector logic
model.config.hidden_size = model.config.text_config.hidden_size  # text hidden size (e.g. 4096 for an 8B Llama)
model.config.mm_context_size = (model.config.vision_config.image_size // model.config.vision_config.patch_size) ** 2  # 576
model.config.mm_projector_type = 'attn_pool+mlp2x_gelu'  # match non_lora_trainables.bin
# Construct projector
vision_projector = build_vision_projector(model.config)
# Load state dict
non_lora_path = os.path.join(lora_model_path, "non_lora_trainables.bin")
if not os.path.exists(non_lora_path):
    raise FileNotFoundError("non_lora_trainables.bin not found.")
full_state = torch.load(non_lora_path, map_location="cpu")
proj_state = {k.replace("vision_projector.", ""): v for k, v in full_state.items() if k.startswith("vision_projector.")}
load_result = vision_projector.load_state_dict(proj_state, strict=False)  # returns (missing_keys, unexpected_keys)
print("Loaded vision projector. Missing keys:", load_result.missing_keys)
# Save projector weights
torch.save(vision_projector.state_dict(), output_mm_path)
print(f"Saved mm_projector.bin to: {output_mm_path}")
I also tried the solution you mentioned: I renamed the model-path model, which is the LoRA model uploaded by the author, to llava-lora-llama, and it still can't run.
Have you successfully run CT-CHAT/llava/serve/ctchat_validation_llama.py? I recommend separating the mm_projector weights only after you've confirmed that this script runs properly.
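One thing worth checking first: with strict=False, stripping a prefix that the checkpoint does not actually use loads nothing at all, and the projector keeps its random initialization; that is one possible source of the NaN/Inf values. As far as I know, stock LLaVA stores the projector under keys containing "mm_projector" (with a "model."/"base_model."-style prefix), not "vision_projector.". A minimal check (the path is a placeholder):

import torch

# List the keys actually stored in the LoRA checkpoint's non-LoRA trainables,
# so the correct prefix can be stripped before loading the projector.
state = torch.load("non_lora_trainables.bin", map_location="cpu")
for key, tensor in state.items():
    print(key, tuple(tensor.shape), tensor.dtype)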