AdaptLLM committed
Commit cdad099 · verified · 1 Parent(s): 1e76a5b

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -37,7 +37,7 @@ AdaMLLM represents our latest advancement in building domain-specific foundation
 - **[AdaptLLM](https://huggingface.co/papers/2309.09530): Adapt LLM to domains**
 We employ rule-based methods to extract tasks from domain-specific corpora, reformatting them into reading comprehension tasks for continued pre-training. Our 7B finance model outperforms domain-specific models of much larger scales, such as BloombergGPT-50B.

-- **AdaMLLM: Adapt MLLM to domains**
+- **AdaMLLM: Adapt Multimodal LLM to domains**
 We extend supervised task synthesis to multimodality, introducing a unified visual instruction synthesizer to extract instruction-response pairs from domain-specific image-caption pairs. Our synthetic tasks outperform those generated by manual rules, GPT-4, and GPT-4V in improving domain-specific performance for MLLMs.