<p align="center" float="left">
<p align="center">Published at Findings of ACL 2023</p>
<p align="center"> <a href="https://arxiv.org/abs/2203.07648" target="_blank">Paper</a></p>
</p>
Illustration of our proposed InfoDCL framework. We exploit distant/surrogate labels (i.e., emojis) to supervise two contrastive losses: a corpus-aware contrastive loss (CCL) and a light label-aware contrastive loss (LCL-LiT). Sequence representations from our model should keep the cluster of each class distinguishable and preserve the semantic relationships between classes.
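For intuition, the label-aware idea can be sketched as a generic supervised contrastive loss over distant labels. This is a simplified illustration in NumPy, not the paper's exact CCL or LCL-LiT formulation; the function name and implementation are our own:

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    """Generic supervised contrastive loss: each anchor is pulled toward
    samples sharing its (distant) label and pushed away from all others."""
    labels = np.asarray(labels)
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                  # pairwise cosine similarities
    self_mask = np.eye(len(labels), dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)      # exclude self-pairs
    # log-softmax over each anchor's row of similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & ~self_mask
    n_pos = positives.sum(axis=1)
    valid = n_pos > 0                            # anchors with >= 1 positive
    pos_log_prob = np.where(positives, log_prob, 0.0).sum(axis=1)
    return -(pos_log_prob[valid] / n_pos[valid]).mean()
```

A higher temperature flattens the similarity distribution; small values (e.g. 0.07) sharpen the contrast between positives and negatives.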
## Checkpoints of Models Pre-Trained with InfoDCL
* InfoDCL-RoBERTa trained with TweetEmoji-EN: https://huggingface.co/UBC-NLP/InfoDCL-emoji
* InfoDCL-RoBERTa trained with TweetHashtag-EN: https://huggingface.co/UBC-NLP/InfoDCL-hashtag
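As a minimal sketch, the checkpoints above can be loaded with the Hugging Face `transformers` library (assuming `transformers` and `torch` are installed; the `load_infodcl` helper and variant keys below are our own naming, not part of the released repositories):

```python
# Map of short variant names to the released InfoDCL checkpoints on the Hub.
CHECKPOINTS = {
    "emoji": "UBC-NLP/InfoDCL-emoji",      # InfoDCL-RoBERTa, TweetEmoji-EN
    "hashtag": "UBC-NLP/InfoDCL-hashtag",  # InfoDCL-RoBERTa, TweetHashtag-EN
}

def load_infodcl(variant="emoji"):
    """Return (tokenizer, model) for the chosen checkpoint; downloads on first use."""
    from transformers import AutoModel, AutoTokenizer
    name = CHECKPOINTS[variant]
    return AutoTokenizer.from_pretrained(name), AutoModel.from_pretrained(name)

if __name__ == "__main__":
    tokenizer, model = load_infodcl("emoji")
    batch = tokenizer("What a great day!", return_tensors="pt")
    hidden = model(**batch).last_hidden_state  # contextual token embeddings
```

The pooled or token-level representations can then be used as features, or the encoder fine-tuned on a downstream social-meaning task.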