AdaptLLM committed on
Commit 1e76a5b · verified · 1 Parent(s): 12aa561

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -34,10 +34,10 @@ AdaMLLM represents our latest advancement in building domain-specific foundation
  </p>
 
 
- - [AdaptLLM](https://huggingface.co/papers/2309.09530)
+ - **[AdaptLLM](https://huggingface.co/papers/2309.09530): Adapt LLM to domains**
   We employ rule-based methods to extract tasks from domain-specific corpora, reformatting them into reading comprehension tasks for continued pre-training. Our 7B finance model outperforms domain-specific models of much larger scales, such as BloombergGPT-50B.
 
- - AdaMLLM
+ - **AdaMLLM: Adapt MLLM to domains**
   We extend supervised task synthesis to multimodality, introducing a unified visual instruction synthesizer to extract instruction-response pairs from domain-specific image-caption pairs. Our synthetic tasks outperform those generated by manual rules, GPT-4, and GPT-4V in improving domain-specific performance for MLLMs.
 