Update README.md
## Resources

**🤗 We share our data and models with example usages, feel free to open any issues or discussions! 🤗**

| Model | Repo ID in HF 🤗 | Domain | Base Model | Training Data | Evaluation Benchmark |
|:------|:-----------------|:-------|:-----------|:--------------|:---------------------|
| [Visual Instruction Synthesizer](https://huggingface.co/AdaptLLM/visual-instruction-synthesizer) | AdaptLLM/visual-instruction-synthesizer | - | open-llava-next-llama3-8b | TBD | - |
| [AdaMLLM-med-11B](https://huggingface.co/AdaptLLM/medicine-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/medicine-Llama-3.2-11B-Vision-Instruct | Biomedicine | Llama-3.2-11B-Vision-Instruct | [medicine-visual-instructions](https://huggingface.co/datasets/AdaptLLM/medicine-visual-instructions) | TBD |
| [AdaMLLM-food-11B](https://huggingface.co/AdaptLLM/food-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/food-Llama-3.2-11B-Vision-Instruct | Food | Llama-3.2-11B-Vision-Instruct | TBD | TBD |
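
As a quick start, here is a minimal sketch of pulling the biomedical model and its training data from the table above with standard Hugging Face tooling. It assumes AdaMLLM-med-11B keeps the chat format of its base model, Llama-3.2-11B-Vision-Instruct, so the usual `MllamaForConditionalGeneration` + `AutoProcessor` flow applies; the image path and question are placeholders, and the authoritative usage examples are the ones on each model and dataset card.

```python
# Sketch: load AdaMLLM-med-11B and ask one question about a local image.
# Assumes the adapted model keeps Llama-3.2-11B-Vision-Instruct's processor and
# chat format; "xray.png" and the question are placeholders, not repo assets.
import torch
from PIL import Image
from huggingface_hub import snapshot_download
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "AdaptLLM/medicine-Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("xray.png")  # placeholder domain image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe the key findings in this radiograph."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output[0], skip_special_tokens=True))

# The released training data is a regular HF dataset repo; download it and check
# the dataset card for the exact file layout before assuming a loading schema.
data_dir = snapshot_download(
    repo_id="AdaptLLM/medicine-visual-instructions", repo_type="dataset"
)
```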
## Contact

Daixuan Cheng: `daixuancheng6@gmail.com`

## About

AdaMLLM represents our latest advancement in building domain-specific foundation models through post-training on synthetic supervised tasks derived from unsupervised contexts.

We extend supervised task synthesis to multimodality, introducing a unified visual instruction synthesizer to extract instruction-response pairs from domain-specific image-caption pairs. Our synthetic tasks outperform those generated by manual rules, GPT-4, and GPT-4V in improving domain-specific performance for MLLMs.
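
To make this concrete, the sketch below shows the shape of the transformation described above: one unlabeled, domain-specific image-caption pair goes in, and a handful of synthetic instruction-response pairs comes out, each becoming a supervised training example for the adapted MLLM. The file name, caption, and `synthetic_tasks` entries are illustrative stand-ins, not the repository's actual prompt template or output schema.

```python
# Illustrative only: the input/output shape of visual instruction synthesis.
# An unlabeled image-caption pair from the target domain is turned into
# synthetic instruction-response pairs used for supervised post-training.
unlabeled_example = {
    "image": "pathology_slide_0042.jpg",  # placeholder file name
    "caption": "H&E-stained section showing dense lymphocytic infiltration.",
}

synthetic_tasks = [  # hypothetical synthesizer outputs, shown only for the format
    {
        "instruction": "What staining technique was used for this slide?",
        "response": "The tissue is stained with hematoxylin and eosin (H&E).",
    },
    {
        "instruction": "Describe the dominant cellular pattern visible here.",
        "response": "There is a dense lymphocytic infiltrate throughout the section.",
    },
]

# Each (image, instruction, response) triple is one post-training example.
for task in synthetic_tasks:
    print(unlabeled_example["image"], "|", task["instruction"])
```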
## Citation

If you find our work helpful, please cite us.