Update README.md
***************** **Updates** ********************

- [2024/12/10] Released evaluation benchmark datasets for biomedicine and food domains: [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark), [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark) (see the loading sketch after this list).
- [2024/12/09] Released AdaMLLM developed from llava-next-llama3-8b: [AdaMLLM-med-8B](https://huggingface.co/AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B), [AdaMLLM-food-8B](https://huggingface.co/AdaptLLM/food-LLaVA-NeXT-Llama3-8B).
- [2024/12/07] Released [visual-instruction-synthesizer](https://huggingface.co/AdaptLLM/visual-instruction-synthesizer), used to synthesize task triplets based on image-caption pairs.
- [2024/12/06] Released AdaMLLM developed from Qwen2-VL-2B and Llama-3.2-11B-Vision: [AdaMLLM-med-2B](https://huggingface.co/AdaptLLM/biomed-Qwen2-VL-2B-Instruct), [AdaMLLM-food-2B](https://huggingface.co/AdaptLLM/food-Qwen2-VL-2B-Instruct), [AdaMLLM-med-11B](https://huggingface.co/AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct), [AdaMLLM-food-11B](https://huggingface.co/AdaptLLM/food-Llama-3.2-11B-Vision-Instruct).
- [2024/12/05] Released [biomedicine visual instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) for post-training MLLMs.
- [2024/11/29] Released our paper.
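
As a quick way to explore the released benchmarks, here is a minimal loading sketch with 🤗 `datasets`. It assumes the benchmark repos load with the default configuration; if a repo defines multiple subsets, pass the config name as the second argument (the dataset card on the Hub lists the valid names).

```python
from datasets import load_dataset

# Load the biomedicine VQA benchmark. Assumption: the repo loads with
# its default config; if it instead exposes multiple subsets, pass a
# config name, e.g. load_dataset("AdaptLLM/biomed-VQA-benchmark", "<subset>").
benchmark = load_dataset("AdaptLLM/biomed-VQA-benchmark")

# Inspect the splits and one record to see the question/answer schema
# (field names may differ from what you expect -- check the dataset card).
print(benchmark)
first_split = next(iter(benchmark))
print(benchmark[first_split][0])
```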

| Model | Repo ID in HF 🤗 | Domain | Base Model | Training Data | Evaluation Benchmark |
|:------|:-----------------|:-------|:-----------|:--------------|:---------------------|
| [Visual Instruction Synthesizer](https://huggingface.co/AdaptLLM/visual-instruction-synthesizer) | AdaptLLM/visual-instruction-synthesizer | - | open-llava-next-llama3-8b | TBD | - |
| [AdaMLLM-med-2B](https://huggingface.co/AdaptLLM/biomed-Qwen2-VL-2B-Instruct) | AdaptLLM/biomed-Qwen2-VL-2B-Instruct | Biomedicine | Qwen2-VL-2B-Instruct | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark) |
| [AdaMLLM-food-2B](https://huggingface.co/AdaptLLM/food-Qwen2-VL-2B-Instruct) | AdaptLLM/food-Qwen2-VL-2B-Instruct | Food | Qwen2-VL-2B-Instruct | TBD | [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark) |
| [AdaMLLM-med-8B](https://huggingface.co/AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B) | AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B | Biomedicine | open-llava-next-llama3-8b | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark) |
| [AdaMLLM-food-8B](https://huggingface.co/AdaptLLM/food-LLaVA-NeXT-Llama3-8B) | AdaptLLM/food-LLaVA-NeXT-Llama3-8B | Food | open-llava-next-llama3-8b | TBD | [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark) |
| [AdaMLLM-med-11B](https://huggingface.co/AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct | Biomedicine | Llama-3.2-11B-Vision-Instruct | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark) |
| [AdaMLLM-food-11B](https://huggingface.co/AdaptLLM/food-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/food-Llama-3.2-11B-Vision-Instruct | Food | Llama-3.2-11B-Vision-Instruct | TBD | [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark) |
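
Since AdaMLLM-med-2B and AdaMLLM-food-2B are post-trained from Qwen2-VL-2B-Instruct, they should load with the standard Qwen2-VL classes in `transformers`. The sketch below is a minimal, unofficial inference example under that assumption (the adapted checkpoint keeps the base model's processor and chat template); the image path and question are placeholders, not part of the release.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Assumption: the adapted checkpoint is loadable with the base
# Qwen2-VL-2B-Instruct classes and chat template.
model_id = "AdaptLLM/biomed-Qwen2-VL-2B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image and question, for illustration only.
image = Image.open("pathology_slide.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What tissue type is shown in this image?"},
        ],
    }
]
prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding so only the answer remains.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```

The same pattern should carry over to the 8B and 11B variants by swapping in the matching model classes for their respective base architectures.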