Update README.md
README.md
CHANGED
@@ -25,7 +25,7 @@ We investigate domain adaptation of MLLMs through post-training, focusing on dat
 
 
 ***************** **Updates** ********************
-- [2024/12/9] Released AdaMLLM developed from llava-next-llama3-8b: [AdaMLLM-med-8B](AdaptLLM/medicine-LLaVA-NeXT-Llama3-8B), [AdaMLLM-food-8B](AdaptLLM/food-LLaVA-NeXT-Llama3-8B).
+- [2024/12/9] Released AdaMLLM developed from llava-next-llama3-8b: [AdaMLLM-med-8B](https://huggingface.co/AdaptLLM/medicine-LLaVA-NeXT-Llama3-8B), [AdaMLLM-food-8B](https://huggingface.co/AdaptLLM/food-LLaVA-NeXT-Llama3-8B).
 - [2024/12/7] Released the [visual-instruction-synthesizer](https://huggingface.co/AdaptLLM/visual-instruction-synthesizer), used to synthesize task triplets based on image-caption pairs.
 - [2024/12/6] Released AdaMLLM developed from Qwen2-VL-2B and Llama-3.2-11B-Vision: [AdaMLLM-med-2B](https://huggingface.co/AdaptLLM/medicine-Qwen2-VL-2B-Instruct), [AdaMLLM-food-2B](https://huggingface.co/AdaptLLM/food-Qwen2-VL-2B-Instruct), [AdaMLLM-med-11B](https://huggingface.co/AdaptLLM/medicine-Llama-3.2-11B-Vision-Instruct), [AdaMLLM-food-11B](https://huggingface.co/AdaptLLM/food-Llama-3.2-11B-Vision-Instruct).
 - [2024/12/05] Released [biomedicine visual instructions](https://huggingface.co/datasets/AdaptLLM/medicine-visual-instructions) for post-training MLLMs.
@@ -33,7 +33,7 @@ We investigate domain adaptation of MLLMs through post-training, focusing on dat
 
 
 ## Resources
-| Model
+| Model | Repo ID in HF 🤗 | Domain | Base Model | Training Data | Evaluation Benchmark |
 |:----------------------------------------------------------------------------|:--------------------------------------------|:--------------|:-------------------------|:------------------------------------------------------------------------------------------------|-----------------------|
 | [Visual Instruction Synthesizer](https://huggingface.co/AdaptLLM/visual-instruction-synthesizer) | AdaptLLM/visual-instruction-synthesizer | - | open-llava-next-llama3-8b | TBD | - |
 | [AdaMLLM-med-2B](https://huggingface.co/AdaptLLM/medicine-Qwen2-VL-2B-Instruct) | AdaptLLM/medicine-Qwen2-VL-2B-Instruct | Biomedicine | Qwen2-VL-2B-Instruct | [medicine-visual-instructions](https://huggingface.co/datasets/AdaptLLM/medicine-visual-instructions) | TBD |
@@ -43,8 +43,6 @@ We investigate domain adaptation of MLLMs through post-training, focusing on dat
 | [AdaMLLM-med-11B](https://huggingface.co/AdaptLLM/medicine-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/medicine-Llama-3.2-11B-Vision-Instruct | Biomedicine | Llama-3.2-11B-Vision-Instruct | [medicine-visual-instructions](https://huggingface.co/datasets/AdaptLLM/medicine-visual-instructions) | TBD |
 | [AdaMLLM-food-11B](https://huggingface.co/AdaptLLM/food-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/food-Llama-3.2-11B-Vision-Instruct | Food | Llama-3.2-11B-Vision-Instruct | TBD | TBD |
 
-
-
 ## About
 
 AdaMLLM represents our latest advancement in building domain-specific foundation models through post-training on synthetic supervised tasks derived from unsupervised contexts.
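For quick verification of a released checkpoint, a minimal inference sketch follows. It assumes the adapted models load through the standard `transformers` Qwen2-VL classes, since they keep their base models' architectures; the repo ID comes from the Resources table above, while the image URL and question are illustrative placeholders.

```python
# Minimal sketch: querying AdaMLLM-med-2B via the standard transformers Qwen2-VL API.
# Assumes the checkpoint follows its base model's (Qwen2-VL-2B-Instruct) loading path;
# the image URL and question are placeholders, not from the original README.
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "AdaptLLM/medicine-Qwen2-VL-2B-Instruct"  # repo ID from the Resources table
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# One image plus one question, formatted with the model's chat template.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What abnormality is visible in this scan?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
image = Image.open(
    requests.get("https://example.com/scan.png", stream=True).raw  # placeholder URL
)

inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding so only the generated answer is printed.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```

The same pattern should carry over to the other released checkpoints by swapping in the matching repo ID and model class for their respective base architectures.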