AdaptLLM committed on
Commit c2ba972 · verified · 1 Parent(s): b23df41

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -25,6 +25,7 @@ We investigate domain adaptation of MLLMs through post-training, focusing on dat
 
 ### Updates
+- [2024/12/7] Released [visual-instruction-synthesizer](https://huggingface.co/AdaptLLM/visual-instruction-synthesizer) used to synthesize task triplets based on image-caption pairs.
 - [2024/12/6] Released AdaMLLM developed from Qwen2-VL-2B and Llama-3.2-11B-Vision: [AdaMLLM-med-2B](https://huggingface.co/AdaptLLM/medicine-Qwen2-VL-2B-Instruct), [AdaMLLM-food-2B](https://huggingface.co/AdaptLLM/food-Qwen2-VL-2B-Instruct), [AdaMLLM-med-11B](https://huggingface.co/AdaptLLM/medicine-Llama-3.2-11B-Vision-Instruct), [AdaMLLM-food-11B](https://huggingface.co/AdaptLLM/food-Llama-3.2-11B-Vision-Instruct),
 - [2024/12/05] Released [biomedicine visual instructions](https://huggingface.co/datasets/AdaptLLM/medicine-visual-instructions) for post-training MLLMs
 - [2024/11/29] Released our paper