---
license: apache-2.0
library_name: peft
tags:
- finetuned
- multimodal
- llava
base_model: mistralai/Mistral-7B-Instruct-v0.1
datasets:
- sshh12/llava-finetune
inference: false
pipeline_tag: image-text-to-text
---

These are weights for a version of `mistralai/Mistral-7B-Instruct-v0.1` finetuned for multimodal applications.

### Modalities

* CLIPVisionModality (use `<image>` in the text and provide `images`; each image is encoded as 10 tokens)

### Usage

GitHub: https://github.com/sshh12/multi_token (includes training scripts and a basic inference server)

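The repository linked above contains the code that loads the vision tower and projector. As a minimal sketch, the LoRA weights alone can be attached to the base model with plain `peft`; note that this does not wire up CLIP image encoding, so use the repo's inference server for actual image-conditioned generation. The adapter id below is a placeholder for this model's Hugging Face repo.

```python
# Minimal sketch: attach the LoRA adapter to the base model with peft.
# Assumption: "<this-adapter-repo>" is a placeholder; the multimodal
# projector and image preprocessing come from the multi_token repo,
# not from this snippet.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

model = PeftModel.from_pretrained(base, "<this-adapter-repo>")
model.eval()
```
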
### Dataset

sshh12/llava-finetune (544,610 examples)

```
{'id': '000000033471', 'images': ['/data/llava_finetune_data/images/coco/train2017/train2017/000000033471.jpg'], 'messages': [{'content': '<image>\nWhat are the colors of the bus in the image?', 'role': 'user'}, {'content': 'The bus in the image is white and red.', 'role': 'assistant'}, {'content': 'What feature can be seen on the back of the bus?', 'role': 'user'}, {'content': 'The back of the bus features an advertisement.', 'role': 'assistant'}, {'content': 'Is the bus driving down the street or pulled off to the side?', 'role': 'user'}, {'content': 'The bus is driving down the street, which is crowded with people and other vehicles.', 'role': 'assistant'}]}
```

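Each record pairs local image paths with a LLaVA-style `messages` list, as shown above. A quick way to inspect the records (a sketch, assuming the dataset loads with the standard `datasets` library and exposes a `train` split):

```python
from datasets import load_dataset

# Image files are referenced by local path in each record's "images" field.
ds = load_dataset("sshh12/llava-finetune", split="train")
print(len(ds))            # expected: 544,610
print(ds[0]["messages"])  # user/assistant turns containing <image>
```
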
### Training Device(s)

```
name, pci.bus_id, vbios_version
NVIDIA RTX A6000, 00000000:02:00.0, 94.02.5C.00.02
```

### Model

```
MistralLMMForCausalLM.model =

PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): MistralLMMForCausalLM(
      (model): MistralLMMModel(
        (embed_tokens): Embedding(32000, 4096)
        (layers): ModuleList(
          (0-31): 32 x MistralDecoderLayer(
            (self_attn): MistralAttention(
              (q_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (k_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (v_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (o_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (rotary_emb): MistralRotaryEmbedding()
            )
            (mlp): MistralMLP(
              (gate_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=14336, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=14336, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (up_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=14336, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=14336, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (down_proj): lora.Linear(
                (base_layer): Linear(in_features=14336, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=14336, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (act_fn): SiLUActivation()
            )
            (input_layernorm): MistralRMSNorm()
            (post_attention_layernorm): MistralRMSNorm()
          )
        )
        (norm): MistralRMSNorm()
        (vision_clip_lmm_projector): _MLPVectorProjector(
          (mlps): ModuleList(
            (0-9): 10 x Sequential(
              (0): Linear(in_features=1024, out_features=4096, bias=True)
              (1): GELU(approximate='none')
              (2): Linear(in_features=4096, out_features=4096, bias=True)
            )
          )
        )
      )
      (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
    )
  )
)
```

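From the module tree above: LoRA adapters of rank 64 (dropout 0.05) wrap every attention and MLP projection, while `vision_clip_lmm_projector` maps a 1024-dim CLIP feature vector into 10 language-model token embeddings via 10 independent two-layer MLPs. Below is a minimal PyTorch sketch of that projector, reconstructed from the printed shapes; the actual `_MLPVectorProjector` implementation lives in the multi_token repo.

```python
import torch
import torch.nn as nn


class MLPVectorProjector(nn.Module):
    """Sketch of `_MLPVectorProjector`, reconstructed from the dump above:
    10 parallel MLPs, each projecting one 1024-d CLIP feature vector to a
    4096-d LM embedding, yielding the 10 tokens behind each <image>."""

    def __init__(self, in_dim: int = 1024, out_dim: int = 4096, num_tokens: int = 10):
        super().__init__()
        self.mlps = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Linear(in_dim, out_dim),
                    nn.GELU(),
                    nn.Linear(out_dim, out_dim),
                )
                for _ in range(num_tokens)
            ]
        )

    def forward(self, clip_features: torch.Tensor) -> torch.Tensor:
        # (batch, 1024) -> (batch, 10, 4096): one embedding per <image> token
        return torch.stack([mlp(clip_features) for mlp in self.mlps], dim=1)
```
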
### Framework versions

- PEFT 0.7.0