Add Batch e461fa66-6a77-47f1-9d55-1152e5f2131c data
This view is limited to 50 files because it contains too many changes. See raw diff.
- .gitattributes +64 -0
- 2024/AceGPT, Localizing Large Language Models in Arabic/b23f89dd-257d-44d2-8db7-563036341ed5_content_list.json +0 -0
- 2024/AceGPT, Localizing Large Language Models in Arabic/b23f89dd-257d-44d2-8db7-563036341ed5_model.json +0 -0
- 2024/AceGPT, Localizing Large Language Models in Arabic/b23f89dd-257d-44d2-8db7-563036341ed5_origin.pdf +3 -0
- 2024/AceGPT, Localizing Large Language Models in Arabic/full.md +0 -0
- 2024/AceGPT, Localizing Large Language Models in Arabic/images.zip +3 -0
- 2024/AceGPT, Localizing Large Language Models in Arabic/layout.json +0 -0
- 2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/3e184059-006f-40ea-abc5-8e4f9a6e51eb_content_list.json +2194 -0
- 2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/3e184059-006f-40ea-abc5-8e4f9a6e51eb_model.json +0 -0
- 2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/3e184059-006f-40ea-abc5-8e4f9a6e51eb_origin.pdf +3 -0
- 2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/full.md +402 -0
- 2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/images.zip +3 -0
- 2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/layout.json +0 -0
- 2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/8df20923-51bb-4d14-9407-d091f93cc639_content_list.json +2244 -0
- 2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/8df20923-51bb-4d14-9407-d091f93cc639_model.json +0 -0
- 2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/8df20923-51bb-4d14-9407-d091f93cc639_origin.pdf +3 -0
- 2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/full.md +377 -0
- 2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/images.zip +3 -0
- 2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/layout.json +0 -0
- 2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/ae152a93-ecfb-4a6c-b85d-f885c8997f05_content_list.json +0 -0
- 2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/ae152a93-ecfb-4a6c-b85d-f885c8997f05_model.json +0 -0
- 2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/ae152a93-ecfb-4a6c-b85d-f885c8997f05_origin.pdf +3 -0
- 2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/full.md +422 -0
- 2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/images.zip +3 -0
- 2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/layout.json +0 -0
- 2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/d9c0afc9-16d2-4fb1-992a-9475f26df5e4_content_list.json +0 -0
- 2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/d9c0afc9-16d2-4fb1-992a-9475f26df5e4_model.json +0 -0
- 2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/d9c0afc9-16d2-4fb1-992a-9475f26df5e4_origin.pdf +3 -0
- 2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/full.md +483 -0
- 2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/images.zip +3 -0
- 2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/layout.json +0 -0
- 2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/59b18015-e896-4681-bd77-ccb145a03e89_content_list.json +0 -0
- 2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/59b18015-e896-4681-bd77-ccb145a03e89_model.json +0 -0
- 2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/59b18015-e896-4681-bd77-ccb145a03e89_origin.pdf +3 -0
- 2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/full.md +313 -0
- 2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/images.zip +3 -0
- 2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/layout.json +0 -0
- 2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/3179a1d7-f1eb-4b22-bb06-32f6ebe41c32_content_list.json +1736 -0
- 2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/3179a1d7-f1eb-4b22-bb06-32f6ebe41c32_model.json +0 -0
- 2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/3179a1d7-f1eb-4b22-bb06-32f6ebe41c32_origin.pdf +3 -0
- 2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/full.md +307 -0
- 2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/images.zip +3 -0
- 2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/layout.json +0 -0
- 2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/929ac009-c5be-48ee-9603-18d937e312c6_content_list.json +0 -0
- 2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/929ac009-c5be-48ee-9603-18d937e312c6_model.json +0 -0
- 2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/929ac009-c5be-48ee-9603-18d937e312c6_origin.pdf +3 -0
- 2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/full.md +706 -0
- 2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/images.zip +3 -0
- 2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/layout.json +0 -0
- 2024/AfriMTE and AfriCOMET_ Enhancing COMET to Embrace Under-resourced African Languages/ea381c68-7b23-4741-a516-c5b3e5daf8a5_content_list.json +0 -0
.gitattributes
CHANGED
@@ -1573,3 +1573,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2024/i-Code[[:space:]]V2_[[:space:]]An[[:space:]]Autoregressive[[:space:]]Generation[[:space:]]Framework[[:space:]]over[[:space:]]Vision,[[:space:]]Language,[[:space:]]and[[:space:]]Speech[[:space:]]Data/8827dfda-97f2-4efc-9a81-adfd59822051_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2024/mOthello_[[:space:]]When[[:space:]]Do[[:space:]]Cross-Lingual[[:space:]]Representation[[:space:]]Alignment[[:space:]]and[[:space:]]Cross-Lingual[[:space:]]Transfer[[:space:]]Emerge[[:space:]]in[[:space:]]Multilingual[[:space:]]Models_/2dfaac5c-a455-474b-bd0a-2c8a8afff933_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2024/“Tell[[:space:]]me[[:space:]]who[[:space:]]you[[:space:]]are[[:space:]]and[[:space:]]I[[:space:]]tell[[:space:]]you[[:space:]]how[[:space:]]you[[:space:]]argue”_[[:space:]]Predicting[[:space:]]Stances[[:space:]]and[[:space:]]Arguments[[:space:]]for[[:space:]]Stakeholder[[:space:]]Groups/18cf9790-d8a5-43fe-b1d5-81f2f34255e6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/AceGPT,[[:space:]]Localizing[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Arabic/b23f89dd-257d-44d2-8db7-563036341ed5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Actively[[:space:]]Learn[[:space:]]from[[:space:]]LLMs[[:space:]]with[[:space:]]Uncertainty[[:space:]]Propagation[[:space:]]for[[:space:]]Generalized[[:space:]]Category[[:space:]]Discovery/3e184059-006f-40ea-abc5-8e4f9a6e51eb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Ada-LEval_[[:space:]]Evaluating[[:space:]]long-context[[:space:]]LLMs[[:space:]]with[[:space:]]length-adaptable[[:space:]]benchmarks/8df20923-51bb-4d14-9407-d091f93cc639_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Adaptive[[:space:]]Cross-lingual[[:space:]]Text[[:space:]]Classification[[:space:]]through[[:space:]]In-Context[[:space:]]One-Shot[[:space:]]Demonstrations/ae152a93-ecfb-4a6c-b85d-f885c8997f05_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Adaptive[[:space:]]Rank[[:space:]]Selections[[:space:]]for[[:space:]]Low-Rank[[:space:]]Approximation[[:space:]]of[[:space:]]Language[[:space:]]Models/d9c0afc9-16d2-4fb1-992a-9475f26df5e4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Adaptive-RAG_[[:space:]]Learning[[:space:]]to[[:space:]]Adapt[[:space:]]Retrieval-Augmented[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]through[[:space:]]Question[[:space:]]Complexity/59b18015-e896-4681-bd77-ccb145a03e89_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Adjusting[[:space:]]Interpretable[[:space:]]Dimensions[[:space:]]in[[:space:]]Embedding[[:space:]]Space[[:space:]]with[[:space:]]Human[[:space:]]Judgments/3179a1d7-f1eb-4b22-bb06-32f6ebe41c32_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Advancing[[:space:]]Beyond[[:space:]]Identification_[[:space:]]Multi-bit[[:space:]]Watermark[[:space:]]for[[:space:]]Large[[:space:]]Language[[:space:]]Models/929ac009-c5be-48ee-9603-18d937e312c6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/AfriMTE[[:space:]]and[[:space:]]AfriCOMET_[[:space:]]Enhancing[[:space:]]COMET[[:space:]]to[[:space:]]Embrace[[:space:]]Under-resourced[[:space:]]African[[:space:]]Languages/ea381c68-7b23-4741-a516-c5b3e5daf8a5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Aligning[[:space:]]as[[:space:]]Debiasing_[[:space:]]Causality-Aware[[:space:]]Alignment[[:space:]]via[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]with[[:space:]]Interventional[[:space:]]Feedback/b14abbe0-b3e4-419c-bac9-7c0b23f9d4de_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/An[[:space:]]Empirical[[:space:]]Study[[:space:]]of[[:space:]]Consistency[[:space:]]Regularization[[:space:]]for[[:space:]]End-to-End[[:space:]]Speech-to-Text[[:space:]]Translation/228a6eee-a6ef-4560-a0ef-3d73620a2d23_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/An[[:space:]]Examination[[:space:]]of[[:space:]]the[[:space:]]Compositionality[[:space:]]of[[:space:]]Large[[:space:]]Generative[[:space:]]Vision-Language[[:space:]]Models/00d412a7-795e-406d-ad88-7dc784902a88_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/An[[:space:]]Interactive[[:space:]]Framework[[:space:]]for[[:space:]]Profiling[[:space:]]News[[:space:]]Media[[:space:]]Sources/e9515491-3c63-42e5-b337-bca228214a9a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Analysis[[:space:]]of[[:space:]]State-Level[[:space:]]Legislative[[:space:]]Process[[:space:]]in[[:space:]]Enhanced[[:space:]]Linguistic[[:space:]]and[[:space:]]Nationwide[[:space:]]Network[[:space:]]Contexts/e633028d-e21b-4e6a-8373-f86ffc1e5965_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Analyzing[[:space:]]the[[:space:]]Role[[:space:]]of[[:space:]]Semantic[[:space:]]Representations[[:space:]]in[[:space:]]the[[:space:]]Era[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models/b4926404-ed25-4a2e-b76c-14db5fcc1162_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Analyzing[[:space:]]the[[:space:]]Use[[:space:]]of[[:space:]]Metaphors[[:space:]]in[[:space:]]News[[:space:]]Editorials[[:space:]]for[[:space:]]Political[[:space:]]Framing/1a3a27a0-b406-4577-979f-3f1e660f41bf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/AnchorAL_[[:space:]]Computationally[[:space:]]Efficient[[:space:]]Active[[:space:]]Learning[[:space:]]for[[:space:]]Large[[:space:]]and[[:space:]]Imbalanced[[:space:]]Datasets/77cc0555-a8d4-478b-baa6-be6dc81ef545_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Anisotropy[[:space:]]is[[:space:]]Not[[:space:]]Inherent[[:space:]]to[[:space:]]Transformers/0506ff5a-6b64-4d00-bab6-e2c7a0ee4eb7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Are[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Temporally[[:space:]]Grounded_/a4d36438-4475-414d-8826-99aa24bdd5da_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Are[[:space:]]Multilingual[[:space:]]LLMs[[:space:]]Culturally-Diverse[[:space:]]Reasoners_[[:space:]]An[[:space:]]Investigation[[:space:]]into[[:space:]]Multicultural[[:space:]]Proverbs[[:space:]]and[[:space:]]Sayings/bb1ec5bc-5072-4917-9c1c-e5f2604b9ccd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Assessing[[:space:]]Factual[[:space:]]Reliability[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Knowledge/d991b5da-20eb-4c83-b83a-bd38e235e800_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Assessing[[:space:]]Logical[[:space:]]Puzzle[[:space:]]Solving[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models_[[:space:]]Insights[[:space:]]from[[:space:]]a[[:space:]]Minesweeper[[:space:]]Case[[:space:]]Study/43423d9a-f8aa-43c6-945c-d8b09b04c3ab_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Assisting[[:space:]]in[[:space:]]Writing[[:space:]]Wikipedia-like[[:space:]]Articles[[:space:]]From[[:space:]]Scratch[[:space:]]with[[:space:]]Large[[:space:]]Language[[:space:]]Models/0b812c8e-ff56-46a0-b9c0-1378a68a3bdc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Attacks,[[:space:]]Defenses[[:space:]]and[[:space:]]Evaluations[[:space:]]for[[:space:]]LLM[[:space:]]Conversation[[:space:]]Safety_[[:space:]]A[[:space:]]Survey/88c9a271-2690-4aed-a179-25e1ea17fbc3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/AudioChatLlama_[[:space:]]Towards[[:space:]]General-Purpose[[:space:]]Speech[[:space:]]Abilities[[:space:]]for[[:space:]]LLMs/75e65c6f-0300-4543-b1b5-2419384cf7f9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/AutoLoRA_[[:space:]]Automatically[[:space:]]Tuning[[:space:]]Matrix[[:space:]]Ranks[[:space:]]in[[:space:]]Low-Rank[[:space:]]Adaptation[[:space:]]Based[[:space:]]on[[:space:]]Meta[[:space:]]Learning/e5327c7d-bc24-4f0b-8623-732c71ad42f1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/AutoPRM_[[:space:]]Automating[[:space:]]Procedural[[:space:]]Supervision[[:space:]]for[[:space:]]Multi-Step[[:space:]]Reasoning[[:space:]]via[[:space:]]Controllable[[:space:]]Question[[:space:]]Decomposition/f578463e-194c-4a67-9e2d-a1d66b301df2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Automatic[[:space:]]Generation[[:space:]]of[[:space:]]Model[[:space:]]and[[:space:]]Data[[:space:]]Cards_[[:space:]]A[[:space:]]Step[[:space:]]Towards[[:space:]]Responsible[[:space:]]AI/f0bf1ae0-cb9f-4f03-86e4-45e7502dca3e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Automatic[[:space:]]Restoration[[:space:]]of[[:space:]]Diacritics[[:space:]]for[[:space:]]Speech[[:space:]]Data[[:space:]]Sets/20499796-6963-48a0-ac48-6d8d0d7e36ac_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Automatic,[[:space:]]Meta[[:space:]]and[[:space:]]Human[[:space:]]Evaluation[[:space:]]for[[:space:]]Multimodal[[:space:]]Summarization[[:space:]]with[[:space:]]Multimodal[[:space:]]Output/a153bb25-c717-4a46-8016-6b2575128981_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/BPE-knockout_[[:space:]]Pruning[[:space:]]Pre-existing[[:space:]]BPE[[:space:]]Tokenisers[[:space:]]with[[:space:]]Backwards-compatible[[:space:]]Morphological[[:space:]]Semi-supervision/499f9928-a03b-41c0-8581-8371beeb8d79_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/BUFFET_[[:space:]]Benchmarking[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]for[[:space:]]Few-shot[[:space:]]Cross-lingual[[:space:]]Transfer/22c471fb-7a4f-4cde-9165-a90ea682d298_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/BUST_[[:space:]]Benchmark[[:space:]]for[[:space:]]the[[:space:]]evaluation[[:space:]]of[[:space:]]detectors[[:space:]]of[[:space:]]LLM-Generated[[:space:]]Text/44685c98-0d65-4bac-a75c-d620659f4ef5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Backdoor[[:space:]]Attacks[[:space:]]on[[:space:]]Multilingual[[:space:]]Machine[[:space:]]Translation/d115d941-bb73-4aff-84d1-8a83826af731_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Backdooring[[:space:]]Instruction-Tuned[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]with[[:space:]]Virtual[[:space:]]Prompt[[:space:]]Injection/5f8f25b1-dfa9-49f5-adcc-e6c709c46944_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/BeLLM_[[:space:]]Backward[[:space:]]Dependency[[:space:]]Enhanced[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]for[[:space:]]Sentence[[:space:]]Embeddings/79f0576b-12e1-42a7-8ecb-273041df66c9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Benchmark[[:space:]]Transparency_[[:space:]]Measuring[[:space:]]the[[:space:]]Impact[[:space:]]of[[:space:]]Data[[:space:]]on[[:space:]]Evaluation/58ee3708-a724-4139-92ad-1415308e57f0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Better[[:space:]]Zero-Shot[[:space:]]Reasoning[[:space:]]with[[:space:]]Role-Play[[:space:]]Prompting/d21a50d0-b955-4059-817f-5f0e59425595_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Beyond[[:space:]]Borders_[[:space:]]Investigating[[:space:]]Cross-Jurisdiction[[:space:]]Transfer[[:space:]]in[[:space:]]Legal[[:space:]]Case[[:space:]]Summarization/ca1e925b-1867-45c0-8833-e5382aa13664_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Beyond[[:space:]]Performance_[[:space:]]Quantifying[[:space:]]and[[:space:]]Mitigating[[:space:]]Label[[:space:]]Bias[[:space:]]in[[:space:]]LLMs/b5e4a856-89ab-4647-a659-5f008557ad31_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/BookSQL_[[:space:]]A[[:space:]]Large[[:space:]]Scale[[:space:]]Text-to-SQL[[:space:]]Dataset[[:space:]]for[[:space:]]Accounting[[:space:]]Domain/6fe40a80-3a23-4ef2-8d8a-6835667d2c12_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Branch-Solve-Merge[[:space:]]Improves[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Evaluation[[:space:]]and[[:space:]]Generation/bb77f1e4-1dfd-481c-a44c-825777bc69c7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Bridging[[:space:]]the[[:space:]]Gap[[:space:]]between[[:space:]]Different[[:space:]]Vocabularies[[:space:]]for[[:space:]]LLM[[:space:]]Ensemble/ee9038f2-5a62-4e88-a36d-a1b4df1cf0b3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Bridging[[:space:]]the[[:space:]]Novice-Expert[[:space:]]Gap[[:space:]]via[[:space:]]Models[[:space:]]of[[:space:]]Decision-Making_[[:space:]]A[[:space:]]Case[[:space:]]Study[[:space:]]on[[:space:]]Remediating[[:space:]]Math[[:space:]]Mistakes/edddfb58-72a8-4772-b7df-28022585990a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Building[[:space:]]Knowledge-Guided[[:space:]]Lexica[[:space:]]to[[:space:]]Model[[:space:]]Cultural[[:space:]]Variation/6ae8deaa-2947-4954-bc7c-0b0770b998de_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/CASA_[[:space:]]Causality-driven[[:space:]]Argument[[:space:]]Sufficiency[[:space:]]Assessment/0fea7294-5ac1-4715-92fb-e67fea2cddfa_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/CCSum_[[:space:]]A[[:space:]]Large-Scale[[:space:]]and[[:space:]]High-Quality[[:space:]]Dataset[[:space:]]for[[:space:]]Abstractive[[:space:]]News[[:space:]]Summarization/46811142-f569-4f24-b8fe-c80d7f721aff_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/CERET_[[:space:]]Cost-Effective[[:space:]]Extrinsic[[:space:]]Refinement[[:space:]]for[[:space:]]Text[[:space:]]Generation/67394dcb-00e2-4669-8df5-6cdab2870509_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/CMB_[[:space:]]A[[:space:]]Comprehensive[[:space:]]Medical[[:space:]]Benchmark[[:space:]]in[[:space:]]Chinese/6d369eef-9f59-4387-a337-6c6143492534_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/CNER_[[:space:]]Concept[[:space:]]and[[:space:]]Named[[:space:]]Entity[[:space:]]Recognition/c8e945e9-8d07-4d20-86c1-06c377de8c01_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/CONSCENDI_[[:space:]]A[[:space:]]Contrastive[[:space:]]and[[:space:]]Scenario-Guided[[:space:]]Distillation[[:space:]]Approach[[:space:]]to[[:space:]]Guardrail[[:space:]]Models[[:space:]]for[[:space:]]Virtual[[:space:]]Assistants/b3b0d298-ade6-4e44-a0f8-deba2306b26e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/COPAL-ID_[[:space:]]Indonesian[[:space:]]Language[[:space:]]Reasoning[[:space:]]with[[:space:]]Local[[:space:]]Culture[[:space:]]and[[:space:]]Nuances/b9804dfc-1eb4-4d04-9fa2-e4cb849890b7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/COSIGN_[[:space:]]Contextual[[:space:]]Facts[[:space:]]Guided[[:space:]]Generation[[:space:]]for[[:space:]]Knowledge[[:space:]]Graph[[:space:]]Completion/1c5aadd9-fd31-420d-ac3d-2c7ed8d6a727_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Can[[:space:]]Knowledge[[:space:]]Graphs[[:space:]]Reduce[[:space:]]Hallucinations[[:space:]]in[[:space:]]LLMs_[[:space:]]_[[:space:]]A[[:space:]]Survey/84fb5aa2-3d5b-411d-a7bf-464215db5552_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Can[[:space:]]Language[[:space:]]Model[[:space:]]Moderators[[:space:]]Improve[[:space:]]the[[:space:]]Health[[:space:]]of[[:space:]]Online[[:space:]]Discourse_/6627a5f5-7eaa-47dc-8fbd-a1dfe2284c16_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Capturing[[:space:]]Perspectives[[:space:]]of[[:space:]]Crowdsourced[[:space:]]Annotators[[:space:]]in[[:space:]]Subjective[[:space:]]Learning[[:space:]]Tasks/0a5dfcfc-db7e-4461-8e6a-66157cb5e5c1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Carpe[[:space:]]diem_[[:space:]]On[[:space:]]the[[:space:]]Evaluation[[:space:]]of[[:space:]]World[[:space:]]Knowledge[[:space:]]in[[:space:]]Lifelong[[:space:]]Language[[:space:]]Models/0819c476-6f1c-44b5-8bf8-73e4fc1a9e72_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Causal[[:space:]]Inference[[:space:]]for[[:space:]]Human-Language[[:space:]]Model[[:space:]]Collaboration/5a10a1ae-c52b-44aa-9ea0-1ade1ad60fbb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/ChatGPT[[:space:]]as[[:space:]]an[[:space:]]Attack[[:space:]]Tool_[[:space:]]Stealthy[[:space:]]Textual[[:space:]]Backdoor[[:space:]]Attack[[:space:]]via[[:space:]]Blackbox[[:space:]]Generative[[:space:]]Model[[:space:]]Trigger/39d66615-4cfa-490c-9c0c-627e7c050787_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/CoE-SQL_[[:space:]]In-Context[[:space:]]Learning[[:space:]]for[[:space:]]Multi-Turn[[:space:]]Text-to-SQL[[:space:]]with[[:space:]]Chain-of-Editions/c337ca05-53cb-4788-b1cc-c388f9d71422_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/CoUDA_[[:space:]]Coherence[[:space:]]Evaluation[[:space:]]via[[:space:]]Unified[[:space:]]Data[[:space:]]Augmentation/3d7c4385-f409-419e-8ea7-7f0e9e6c2a1a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Code[[:space:]]Models[[:space:]]are[[:space:]]Zero-shot[[:space:]]Precondition[[:space:]]Reasoners/cdd4f820-1bc2-480b-8f4f-3d8c136c3e9c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/ComCLIP_[[:space:]]Training-Free[[:space:]]Compositional[[:space:]]Image[[:space:]]and[[:space:]]Text[[:space:]]Matching/c63df332-8222-4957-9bdf-76cca383fc2f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2024/Comparing[[:space:]]Explanation[[:space:]]Faithfulness[[:space:]]between[[:space:]]Multilingual[[:space:]]and[[:space:]]Monolingual[[:space:]]Fine-tuned[[:space:]]Language[[:space:]]Models/bf88b5c3-95c1-4885-a78d-b7bf32a23fc7_origin.pdf filter=lfs diff=lfs merge=lfs -text
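Note on the pattern syntax above: in .gitattributes, unescaped whitespace separates a path pattern from its attributes, so literal spaces in file names are written with the POSIX character class [[:space:]]. Each of the 64 added lines routes one origin PDF through Git LFS (filter=lfs diff=lfs merge=lfs) and unsets the text attribute (-text) so Git treats the file as binary.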
2024/AceGPT, Localizing Large Language Models in Arabic/b23f89dd-257d-44d2-8db7-563036341ed5_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.
2024/AceGPT, Localizing Large Language Models in Arabic/b23f89dd-257d-44d2-8db7-563036341ed5_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2024/AceGPT, Localizing Large Language Models in Arabic/b23f89dd-257d-44d2-8db7-563036341ed5_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:02a2ac9891934597b60809a55a19b5595e557f9c698e4e4f90e15a2b70959cb9
+size 685984
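The three added lines above are not the PDF itself but a Git LFS pointer: the repository stores this small text stub, and the real bytes are fetched from LFS storage by their sha256 oid. A minimal parsing sketch (the parse_lfs_pointer helper is illustrative, not part of any LFS library):

# Minimal sketch: split a Git LFS pointer file into its "key value" fields.
def parse_lfs_pointer(text: str) -> dict[str, str]:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:02a2ac9891934597b60809a55a19b5595e557f9c698e4e4f90e15a2b70959cb9\n"
    "size 685984\n"
)
info = parse_lfs_pointer(pointer)
print(info["oid"])   # content address of the PDF in LFS storage
print(info["size"])  # byte count of the resolved file: '685984'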
2024/AceGPT, Localizing Large Language Models in Arabic/full.md
ADDED
The diff for this file is too large to render. See raw diff.
2024/AceGPT, Localizing Large Language Models in Arabic/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0747aba24cc40885b858249f5daaf125479d3544edeb0c6e848fe12094879036
+size 1521620
2024/AceGPT, Localizing Large Language Models in Arabic/layout.json
ADDED
The diff for this file is too large to render. See raw diff.
2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/3e184059-006f-40ea-abc5-8e4f9a6e51eb_content_list.json
ADDED
|
@@ -0,0 +1,2194 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
117,
|
| 8 |
+
90,
|
| 9 |
+
878,
|
| 10 |
+
130
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Jinggui Liang $^{1}$ , Lizi Liao $^{1}$ , Hao Fei $^{2}$ , Bobo Li $^{3}$ , Jing Jiang $^{1}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
226,
|
| 19 |
+
152,
|
| 20 |
+
771,
|
| 21 |
+
170
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ Singapore Management University, $^{2}$ National University of Singapore, $^{3}$ Wuhan University jg.liang.2023@phdcs.smu.edu.sg lzliao@smu.edu.sg haofei37@nus.edu.sg boboli@whu.edu.cn jingjiang@smu.edu.sg",
|
| 28 |
+
"bbox": [
|
| 29 |
+
131,
|
| 30 |
+
170,
|
| 31 |
+
870,
|
| 32 |
+
221
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Abstract",
|
| 39 |
+
"text_level": 1,
|
| 40 |
+
"bbox": [
|
| 41 |
+
260,
|
| 42 |
+
262,
|
| 43 |
+
339,
|
| 44 |
+
277
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "Generalized category discovery faces a key issue: the lack of supervision for new and unseen data categories. Traditional methods typically combine supervised pretraining with self-supervised learning to create models, and then employ clustering for category identification. However, these approaches tend to become overly tailored to known categories, failing to fully resolve the core issue. Hence, we propose to integrate the feedback from LLMs into an active learning paradigm. Specifically, our method innovatively employs uncertainty propagation to select data samples from high-uncertainty regions, which are then labeled using LLMs through a comparison-based prompting scheme. This not only eases the labeling task but also enhances accuracy in identifying new categories. Additionally, a soft feedback propagation mechanism is introduced to minimize the spread of inaccurate feedback. Experiments on various datasets demonstrate our framework's efficacy and generalizability, significantly improving baseline models at a nominal average cost.",
|
| 51 |
+
"bbox": [
|
| 52 |
+
142,
|
| 53 |
+
288,
|
| 54 |
+
460,
|
| 55 |
+
630
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "1 Introduction",
|
| 62 |
+
"text_level": 1,
|
| 63 |
+
"bbox": [
|
| 64 |
+
115,
|
| 65 |
+
640,
|
| 66 |
+
260,
|
| 67 |
+
656
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "text",
|
| 73 |
+
"text": "Generalized Category Discovery (GCD) is a crucial task in open-world computing (Lin et al., 2020; Zhang et al., 2021b), where the goal is to automate the classification of partially labeled data. It uniquely challenges systems to not only recognize predefined categories but also to discover entirely new categories from a mix of labeled and unlabeled data (Yang et al., 2021; Zeng et al., 2022). This task mirrors the dynamic and evolving nature of real-world data, where new categories frequently emerge, necessitating models that can adapt and learn continually.",
|
| 74 |
+
"bbox": [
|
| 75 |
+
114,
|
| 76 |
+
665,
|
| 77 |
+
489,
|
| 78 |
+
859
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "image",
|
| 84 |
+
"img_path": "images/b2ca4dd5abd7da63fd281db6b27d8b47b92735f2bc370c27a54eb32915d2be5a.jpg",
|
| 85 |
+
"image_caption": [
|
| 86 |
+
"Figure 1: The active learning loop with propagated LLM feedback for model training."
|
| 87 |
+
],
|
| 88 |
+
"image_footnote": [],
|
| 89 |
+
"bbox": [
|
| 90 |
+
515,
|
| 91 |
+
262,
|
| 92 |
+
873,
|
| 93 |
+
438
|
| 94 |
+
],
|
| 95 |
+
"page_idx": 0
|
| 96 |
+
},
|
| 97 |
+
{
|
| 98 |
+
"type": "text",
|
| 99 |
+
"text": "In traditional GCD methods, the initial step often involves supervised pretraining on a labeled dataset to establish a foundational understanding of known categories (Zhong et al., 2021; Vaze et al., 2022). This is followed by self-supervised learning on unlabeled data or even contrastive learning, allowing the model to extract and learn patterns without explicit category labels (An et al., 2023). The final stage typically employs clustering techniques, like K-Means (MacQueen et al., 1967), to group similar data points, aiming to identify categories. However, this sequential process tends to imprint a bias towards the initially learned, known categories, thus limiting the model's ability to generalize to new, unseen categories (Mou et al., 2022). Such overfitting to familiar data restricts the scope of GCD, preventing it from fully embracing the open-world setting it is intended for.",
|
| 100 |
+
"bbox": [
|
| 101 |
+
507,
|
| 102 |
+
508,
|
| 103 |
+
882,
|
| 104 |
+
799
|
| 105 |
+
],
|
| 106 |
+
"page_idx": 0
|
| 107 |
+
},
|
| 108 |
+
{
|
| 109 |
+
"type": "text",
|
| 110 |
+
"text": "Recently, Large Language Models (LLMs) such as GPT-4 (OpenAI, 2023), PaLM (Chowdhery et al., 2023), and LLaMA (Touvron et al., 2023) have shown extraordinary versatility across a broad range of NLP tasks, providing good quality super",
|
| 111 |
+
"bbox": [
|
| 112 |
+
507,
|
| 113 |
+
801,
|
| 114 |
+
880,
|
| 115 |
+
882
|
| 116 |
+
],
|
| 117 |
+
"page_idx": 0
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"type": "page_footnote",
|
| 121 |
+
"text": "<https://github.com/liangjinggui/ALUP>",
|
| 122 |
+
"bbox": [
|
| 123 |
+
137,
|
| 124 |
+
866,
|
| 125 |
+
418,
|
| 126 |
+
881
|
| 127 |
+
],
|
| 128 |
+
"page_idx": 0
|
| 129 |
+
},
|
| 130 |
+
{
|
| 131 |
+
"type": "page_number",
|
| 132 |
+
"text": "7845",
|
| 133 |
+
"bbox": [
|
| 134 |
+
480,
|
| 135 |
+
927,
|
| 136 |
+
519,
|
| 137 |
+
940
|
| 138 |
+
],
|
| 139 |
+
"page_idx": 0
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"type": "footer",
|
| 143 |
+
"text": "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 7845-7858 June 16-21, 2024 ©2024 Association for Computational Linguistics",
|
| 144 |
+
"bbox": [
|
| 145 |
+
139,
|
| 146 |
+
945,
|
| 147 |
+
857,
|
| 148 |
+
984
|
| 149 |
+
],
|
| 150 |
+
"page_idx": 0
|
| 151 |
+
},
|
| 152 |
+
{
|
| 153 |
+
"type": "text",
|
| 154 |
+
"text": "vision signals for summarization (Liu et al., 2023), clustering (Zhang et al., 2023c), etc. Their ability to understand and generate nuanced language patterns makes them promising for supplementing the supervision of new categories in GCD. However, the direct application of LLMs in GCD, which typically involves processing and clustering thousands of samples, raises substantial challenges. The intensive computational demands of LLMs could lead to issues with data privacy, high latency, and increased costs, which are particularly problematic in large-scale GCD scenarios.",
|
| 155 |
+
"bbox": [
|
| 156 |
+
114,
|
| 157 |
+
85,
|
| 158 |
+
489,
|
| 159 |
+
278
|
| 160 |
+
],
|
| 161 |
+
"page_idx": 1
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"type": "text",
|
| 165 |
+
"text": "To circumvent the above challenges, integrating LLMs into an active learning framework presents a practical and efficient solution. This approach entails selectively using LLMs to provide supervision signals, especially in cases where the data is most uncertain or the categories are novel. However, this integration brings forth new challenges: optimizing the use of LLMs to ensure cost and time efficiency, and critically, ensuring the reliability of the feedback provided by LLMs. Effective strategies are needed to mitigate the risk of propagating incorrect feedback from LLMs.",
|
| 166 |
+
"bbox": [
|
| 167 |
+
114,
|
| 168 |
+
282,
|
| 169 |
+
489,
|
| 170 |
+
475
|
| 171 |
+
],
|
| 172 |
+
"page_idx": 1
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"type": "text",
|
| 176 |
+
"text": "Addressing these challenges, we propose a novel framework for GCD to Actively Learn from LLMs with Uncertainty Propagation, termed as ALUP. As shown in Figure 1, we begin by employing an uncertainty propagation strategy, which systematically identifies data samples in regions of high uncertainty – these are the areas where the model is least confident and, therefore, where LLM input could be most beneficial. The selected samples are then labeled using LLMs through a sophisticated comparison-based prompting technique. This method leverages the comparative strength of LLMs, making it easier for them to provide accurate feedback, especially for new and complex categories. To further enhance our approach, we incorporate a soft label propagation mechanism. This mechanism carefully extends the LLMs-generated feedback to similar, neighboring samples, effectively amplifying the value of each LLM query while minimizing the risk of propagating errors. Rigorous testing on diverse datasets has shown that our method not only significantly improves upon existing baseline models but also does so with a nominal increase in cost, offering a scalable, efficient, and effective solution for the",
|
| 177 |
+
"bbox": [
|
| 178 |
+
114,
|
| 179 |
+
480,
|
| 180 |
+
489,
|
| 181 |
+
882
|
| 182 |
+
],
|
| 183 |
+
"page_idx": 1
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"type": "text",
|
| 187 |
+
"text": "intricate problem of GCD.",
|
| 188 |
+
"bbox": [
|
| 189 |
+
509,
|
| 190 |
+
85,
|
| 191 |
+
709,
|
| 192 |
+
101
|
| 193 |
+
],
|
| 194 |
+
"page_idx": 1
|
| 195 |
+
},
|
| 196 |
+
{
|
| 197 |
+
"type": "text",
|
| 198 |
+
"text": "The main contributions of this work can be summarized as follows:",
|
| 199 |
+
"bbox": [
|
| 200 |
+
507,
|
| 201 |
+
101,
|
| 202 |
+
882,
|
| 203 |
+
133
|
| 204 |
+
],
|
| 205 |
+
"page_idx": 1
|
| 206 |
+
},
|
| 207 |
+
{
|
| 208 |
+
"type": "list",
|
| 209 |
+
"sub_type": "text",
|
| 210 |
+
"list_items": [
|
| 211 |
+
"- We developed an innovative active learning framework integrating LLMs' feedback for GCD, addressing the challenge of limited supervision for new data categories.",
|
| 212 |
+
"- We combined uncertainty-region based data selection and comparison-based LLMs prompting, significantly enhancing GCD accuracy and efficiency with soft propagation.",
|
| 213 |
+
"- Experiments demonstrated marked improvements over traditional GCD methods across diverse datasets, affirming the ALUP's effectiveness and resource efficiency."
|
| 214 |
+
],
|
| 215 |
+
"bbox": [
|
| 216 |
+
531,
|
| 217 |
+
143,
|
| 218 |
+
884,
|
| 219 |
+
350
|
| 220 |
+
],
|
| 221 |
+
"page_idx": 1
|
| 222 |
+
},
|
| 223 |
+
{
|
| 224 |
+
"type": "text",
|
| 225 |
+
"text": "2 Related Work",
|
| 226 |
+
"text_level": 1,
|
| 227 |
+
"bbox": [
|
| 228 |
+
509,
|
| 229 |
+
361,
|
| 230 |
+
665,
|
| 231 |
+
376
|
| 232 |
+
],
|
| 233 |
+
"page_idx": 1
|
| 234 |
+
},
|
| 235 |
+
{
|
| 236 |
+
"type": "text",
|
| 237 |
+
"text": "2.1 Generalized Category Discovery",
|
| 238 |
+
"text_level": 1,
|
| 239 |
+
"bbox": [
|
| 240 |
+
507,
|
| 241 |
+
386,
|
| 242 |
+
808,
|
| 243 |
+
401
|
| 244 |
+
],
|
| 245 |
+
"page_idx": 1
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "text",
|
| 249 |
+
"text": "Unsupervised Methods: The realm of GCD has been fundamentally shaped by unsupervised methods, focusing on learning cluster-friendly representations. These early methods (Xie et al., 2016; Yang et al., 2017; Padmasundari and Bangalore, 2018; Caron et al., 2018; Hadifar et al., 2019) laid the groundwork by using unsupervised clustering algorithms to group samples based on inherent similarities. Recent advancements, particularly with the emergence of LLMs, have brought a paradigm shift. The integration of LLMs in unsupervised GCD (De Raedt et al., 2023; Zhang et al., 2023c; Viswanathan et al., 2023) represents a novel direction, pushing the boundaries of category identification beyond traditional clustering techniques.",
|
| 250 |
+
"bbox": [
|
| 251 |
+
507,
|
| 252 |
+
407,
|
| 253 |
+
882,
|
| 254 |
+
649
|
| 255 |
+
],
|
| 256 |
+
"page_idx": 1
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"text": "Semi-Supervised Methods: In contrast, semi-supervised GCD methods blend limited labeled data with possibly larger unlabeled data to enhance category discovery (Hsu et al., 2018, 2019; Han et al., 2019). Methods like CDAC+ (Lin et al., 2020) utilize labeled data to guide clustering, creating a synergy between supervised knowledge and unsupervised discovery. The two-stage scheme, involving base model pretraining and iterative optimization (Zhang et al., 2021a,b; Wu et al., 2022; Wei et al., 2022; Zhang et al., 2023a; Zhou et al., 2023; Mou et al., 2023), has gained popularity. It benefits from pseudo label signals generated by the pretrained model, although it often struggles with",
|
| 261 |
+
"bbox": [
|
| 262 |
+
507,
|
| 263 |
+
657,
|
| 264 |
+
884,
|
| 265 |
+
883
|
| 266 |
+
],
|
| 267 |
+
"page_idx": 1
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "page_number",
|
| 271 |
+
"text": "2",
|
| 272 |
+
"bbox": [
|
| 273 |
+
492,
|
| 274 |
+
903,
|
| 275 |
+
505,
|
| 276 |
+
915
|
| 277 |
+
],
|
| 278 |
+
"page_idx": 1
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "footer",
|
| 282 |
+
"text": "7846",
|
| 283 |
+
"bbox": [
|
| 284 |
+
480,
|
| 285 |
+
927,
|
| 286 |
+
521,
|
| 287 |
+
941
|
| 288 |
+
],
|
| 289 |
+
"page_idx": 1
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "text",
|
| 293 |
+
"text": "the quality of pseudo labels and sample representations. Efforts to refine learning objectives, such as contrastive learning (Mou et al., 2022; Zhang et al., 2022a), aim to directly learn discriminative representations for new categories. Yet, the challenge remains in effectively decoupling pseudo label generation from representation learning (Wu et al., 2024), a gap our work addresses by introducing LLMs into the GCD.",
|
| 294 |
+
"bbox": [
|
| 295 |
+
114,
|
| 296 |
+
85,
|
| 297 |
+
492,
|
| 298 |
+
231
|
| 299 |
+
],
|
| 300 |
+
"page_idx": 2
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"type": "text",
|
| 304 |
+
"text": "2.2 Active Learning in the Era of LLMs",
|
| 305 |
+
"text_level": 1,
|
| 306 |
+
"bbox": [
|
| 307 |
+
115,
|
| 308 |
+
242,
|
| 309 |
+
447,
|
| 310 |
+
258
|
| 311 |
+
],
|
| 312 |
+
"page_idx": 2
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"type": "text",
|
| 316 |
+
"text": "Traditional Active Learning (AL): AL has traditionally been a solution to the data scarcity problem in NLP (Ren et al., 2022; Zhang et al., 2022b), focusing on identifying and annotating informative samples. Various acquisition strategies have been employed, including uncertainty-based (Wang and Shang, 2014; Schröder et al., 2022; Yu et al., 2023), diversity-based (Sener and Savarese, 2018; Gissin and Shalev-Shwartz, 2019; Citovsky et al., 2021), and hybrid methods (Liu et al., 2018; Zhan et al., 2022). While effective, these methods still rely on expensive human expertise for annotation.",
|
| 317 |
+
"bbox": [
|
| 318 |
+
114,
|
| 319 |
+
263,
|
| 320 |
+
489,
|
| 321 |
+
457
|
| 322 |
+
],
|
| 323 |
+
"page_idx": 2
|
| 324 |
+
},
|
| 325 |
+
{
|
| 326 |
+
"type": "text",
|
| 327 |
+
"text": "LLMs as a Game-Changer in AL: With the advent of LLMs, a new frontier in AL has been explored. LLMs are now being considered as cost-effective alternatives to human experts (Zhang et al., 2023c; Cheng et al., 2023; Zhang et al., 2023b; Margatina et al., 2023; Liao et al., 2023). For instance, Xiao et al. (2023) demonstrated the use of LLMs as active annotators, harnessing their ability to distill task-specific knowledge interactively. In our work, we further this exploration by applying AL with LLMs to GCD. Our unique contribution not only lies in the implementation of an uncertainty-driven propagation strategy to maximize the utility of LLMs in a cost-effective manner, but also in the design of a soft feedback propagation scheme to minimize the spread of inaccurate feedback.",
|
| 328 |
+
"bbox": [
|
| 329 |
+
114,
|
| 330 |
+
467,
|
| 331 |
+
489,
|
| 332 |
+
740
|
| 333 |
+
],
|
| 334 |
+
"page_idx": 2
|
| 335 |
+
},
|
| 336 |
+
{
|
| 337 |
+
"type": "text",
|
| 338 |
+
"text": "3 Methodology",
|
| 339 |
+
"text_level": 1,
|
| 340 |
+
"bbox": [
|
| 341 |
+
115,
|
| 342 |
+
753,
|
| 343 |
+
265,
|
| 344 |
+
771
|
| 345 |
+
],
|
| 346 |
+
"page_idx": 2
|
| 347 |
+
},
|
| 348 |
+
{
|
| 349 |
+
"type": "text",
|
| 350 |
+
"text": "3.1 Problem Formulation",
|
| 351 |
+
"text_level": 1,
|
| 352 |
+
"bbox": [
|
| 353 |
+
115,
|
| 354 |
+
780,
|
| 355 |
+
332,
|
| 356 |
+
794
|
| 357 |
+
],
|
| 358 |
+
"page_idx": 2
|
| 359 |
+
},
|
| 360 |
+
{
|
| 361 |
+
"type": "text",
|
| 362 |
+
"text": "We study the GCD problem defined as follows: Assuming we have a known category set $\\mathcal{C}_k$ and an unknown category set $\\mathcal{C}_u$ , where $\\{\\mathcal{C}_k \\cap \\mathcal{C}_u\\} = \\emptyset$ and $|\\mathcal{C}_k| + |\\mathcal{C}_u| = K$ . Here $K$ is the total number of categories. Under the semi-supervised GCD set",
"bbox": [114, 801, 489, 882],
"page_idx": 2
},
{
"type": "text",
"text": "ting, given a labeled data set $\\mathcal{D}_l = \\{(x_i,y_i)|y_i\\in \\mathcal{C}_k\\}_{i = 1}^L$ , and an unlabeled data set $\\mathcal{D}_u = \\{x_j\\}_{j = 1}^U$ , where the category of each $x_{j}$ belongs to $\\{\\mathcal{C}_k\\cup \\mathcal{C}_u\\}$ , the task is to learn a representation extractor $\\mathcal{M}$ to identify all unknown categories from $\\mathcal{D}_u$ and perform accurate clustering to classify each $x_{i}$ in $\\{\\mathcal{D}_l\\cup \\mathcal{D}_u\\}$ into its corresponding category.",
"bbox": [507, 84, 882, 200],
"page_idx": 2
},
{
"type": "text",
"text": "3.2 Approach Overview",
"text_level": 1,
"bbox": [509, 208, 714, 223],
"page_idx": 2
},
{
"type": "text",
"text": "General GCD methods typically first extract representations $\\mathcal{Z} = \\{\\pmb {z}_i\\}_{i = 1}^{\\left|\\mathcal{D}_l\\cup \\mathcal{D}_u\\right|}$ via model $\\mathcal{M}$ for each sample $x_{i}$ and then perform K-Means to locate cluster centers $\\{\\pmb {\\mu}_i\\}_{i = 1}^K$ for GCD. Our proposed ALUP builds upon existing GCD models and effectively incorporates LLMs' feedback in an active learning scheme.",
"bbox": [507, 228, 882, 343],
"page_idx": 2
},
{
"type": "text",
"text": "Figure 2 depicts an overview of our ALUP framework for GCD. It encompasses three key designs: Uncertainty Propagation for sample selection, Comparison-based Prompting for soliciting LLMs' feedback, and Soft Feedback Propagation for wisely spreading the feedback. In what follows, we will detail these designs separately.",
"bbox": [507, 343, 882, 456],
"page_idx": 2
},
{
"type": "text",
"text": "3.3 Uncertainty Propagation (UP)",
"text_level": 1,
"bbox": [507, 466, 793, 482],
"page_idx": 2
},
{
"type": "text",
"text": "Within the ALUP framework, we design the uncertainty propagation to select the most informative unlabeled samples that are representative of high-uncertainty regions. Note that given a general GCD model $\\mathcal{M}$ , we can extract representations $z_{i}$ for each $x_{i}$ in the dataset and perform $K$ -means to locate cluster centers $\\{\\pmb{\\mu}_k\\}_{k=1}^K$ . To estimate the model predictive uncertainty, following Xie et al. (2016), we use the Student's $t$ -distribution to compute the probability of assigning the sample $x_{i}$ to each cluster $k$ :",
"bbox": [507, 486, 882, 662],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\nq_{ik} = \\frac{\\left(1 + \\|\\boldsymbol{z}_i - \\boldsymbol{\\mu}_k\\|^2/\\alpha\\right)^{-\\frac{\\alpha+1}{2}}}{\\sum_{k'}\\left(1 + \\|\\boldsymbol{z}_i - \\boldsymbol{\\mu}_{k'}\\|^2/\\alpha\\right)^{-\\frac{\\alpha+1}{2}}}, \\tag{1}\n$$\n",
"text_format": "latex",
"bbox": [547, 667, 880, 709],
"page_idx": 2
},
{
"type": "text",
"text": "where $\\alpha$ represents the degrees of freedom in the Student's $t$ -distribution. After obtaining the model predictive probabilities, we use the entropy (Lewis and Gale, 1994) to measure the uncertainty for each sample $x_{i}$ :",
"bbox": [507, 715, 880, 796],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\nu(x_i) = -\\sum_{k=1}^{K} q_{ik} \\log q_{ik}. \\tag{2}\n$$\n",
"text_format": "latex",
"bbox": [600, 799, 880, 841],
"page_idx": 2
},
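The two quantities above are straightforward to reproduce. Below is a minimal NumPy sketch of Equations (1) and (2); it is an illustration under assumed array shapes, not the authors' implementation, and `alpha = 1.0` is only a common default.

```python
import numpy as np

def soft_assignments(Z, centers, alpha=1.0):
    """Student's t soft cluster assignments (Eq. 1).

    Z: (n, d) sample representations; centers: (K, d) cluster centers.
    """
    # squared Euclidean distance from every sample to every center
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, K)
    num = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return num / num.sum(axis=1, keepdims=True)                # rows sum to 1

def entropy_uncertainty(Q, eps=1e-12):
    """Predictive entropy per sample (Eq. 2)."""
    return -(Q * np.log(Q + eps)).sum(axis=1)
```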
{
"type": "text",
"text": "Here, a higher $u(x_{i})$ can indicate a higher likelihood of the model $\\mathcal{M}$ incorrectly assigning $x_{i}$ to a",
"bbox": [507, 848, 882, 882],
"page_idx": 2
},
{
"type": "page_number",
"text": "3",
"bbox": [492, 903, 504, 915],
"page_idx": 2
},
{
"type": "footer",
"text": "7847",
"bbox": [480, 927, 519, 940],
"page_idx": 2
},
{
"type": "image",
"img_path": "images/0972191673cb57e2057e76a82161f5eedbf200aec4bf49f784b519846bec5a1d.jpg",
"image_caption": [
"Figure 2: The overall ALUP framework. It consists of three main designs: Uncertainty Propagation for region-based sample selection, Comparison-based Prompting for soliciting more accurate LLM feedback, and Soft Feedback Propagation for wisely spreading the feedback to boost both efficiency and effectiveness."
],
"image_footnote": [],
"bbox": [117, 84, 868, 324],
"page_idx": 3
},
{
"type": "text",
"text": "wrong cluster. However, directly adopting this individual uncertainty score for selecting samples can lead to suboptimal outcomes as it can be sensitive to outliers (Karamcheti et al., 2021). To address this issue, following Yu et al. (2023), we further measure the similarities between each sample and its neighbors and propagate the individual uncertainty score to neighbors. Specifically, for each data point $x_{i}$ , we first find its $k$ -nearest neighbors based on the Euclidean distance as:",
"bbox": [114, 401, 489, 561],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{N}\\left(x_i\\right) = \\underset{\\text{top-}k}{\\operatorname{KNN}}\\left(\\boldsymbol{z}_i, \\mathcal{Z}^u\\right), \\tag{3}\n$$\n",
"text_format": "latex",
"bbox": [206, 576, 487, 601],
"page_idx": 3
},
{
"type": "text",
"text": "where $\\mathcal{Z}^u$ denotes the representations of unlabeled samples and $\\mathcal{N}(x_i)$ represents the set of nearest neighbors of $x_{i}$ . Then, we calculate the similarities between $x_{i}$ and its neighbors based on the radial basis function (RBF) (Scholkopf et al., 1997):",
"bbox": [114, 612, 487, 694],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\operatorname{sim}\\left(\\boldsymbol{z}_i, \\boldsymbol{z}_j\\right) = \\exp\\left(-\\rho \\|\\boldsymbol{z}_i - \\boldsymbol{z}_j\\|_2^2\\right), \\tag{4}\n$$\n",
"text_format": "latex",
"bbox": [164, 705, 487, 725],
"page_idx": 3
},
{
"type": "text",
"text": "where $x_{j}\\in \\mathcal{N}(x_{i})$ and $\\rho$ is a hyper-parameter that regulates the extent of uncertainty propagation. After measuring the similarities, we refine the uncertainty score of sample $x_{i}$ as:",
"bbox": [114, 736, 489, 801],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\nu(x_i) \\leftarrow u(x_i) + \\frac{\\sum_{x_j \\in \\mathcal{N}(x_i)} \\operatorname{sim}\\left(\\boldsymbol{z}_i, \\boldsymbol{z}_j\\right) \\cdot u(x_j)}{\\left|\\mathcal{N}(x_i)\\right|}. \\tag{5}\n$$\n",
"text_format": "latex",
"bbox": [126, 810, 487, 841],
"page_idx": 3
},
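Equations (3)-(5) amount to a k-nearest-neighbor search, an RBF similarity, and an averaged update over neighbors. A small NumPy sketch follows; the brute-force distance computation, the hyper-parameter defaults, and the round loop are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def propagate_uncertainty(Z, u, k=25, rho=1.0, rounds=1):
    """Neighborhood uncertainty propagation (Eqs. 3-5).

    Z: (n, d) unlabeled-sample representations; u: (n,) entropy scores.
    """
    # pairwise squared Euclidean distances
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # a sample is not its own neighbor
    nbrs = np.argsort(d2, axis=1)[:, :k]         # (n, k) nearest-neighbor indices (Eq. 3)
    sim = np.exp(-rho * d2)                      # RBF similarity (Eq. 4)
    rows = np.arange(len(Z))[:, None]
    for _ in range(rounds):                      # several rounds of propagation
        u = u + (sim[rows, nbrs] * u[nbrs]).mean(axis=1)  # Eq. 5
    return u
```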
{
"type": "text",
"text": "After several rounds of uncertainty score propagation, we obtain the final uncertainty score $u(x_{i})$ .",
"bbox": [115, 850, 489, 882],
"page_idx": 3
},
{
"type": "text",
"text": "Based on this score, we greedily select one sample $x_{i}^{q}$ from each cluster $c_{i}$ to form the sample set $\\mathcal{Q}$ :",
"bbox": [507, 401, 880, 436],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\nx_i^q = \\underset{x_j \\in c_i}{\\operatorname{argmax}}\\left(u(x_j)\\right). \\tag{6}\n$$\n",
"text_format": "latex",
"bbox": [611, 444, 880, 472],
"page_idx": 3
},
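Equation (6) then reduces to a per-cluster argmax over the propagated scores. A short sketch under assumed inputs (cluster ids from K-means, scores from the propagation step above):

```python
import numpy as np

def select_queries(u, cluster_ids):
    """Greedy per-cluster selection of query samples (Eq. 6).

    u: (n,) propagated uncertainty scores; cluster_ids: (n,) cluster
    assignments. Returns one sample index per cluster, forming Q.
    """
    queries = {}
    for c in np.unique(cluster_ids):
        members = np.where(cluster_ids == c)[0]
        queries[int(c)] = int(members[np.argmax(u[members])])
    return queries
```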
{
"type": "text",
"text": "We emphasize that a sample will exhibit higher propagated uncertainty only when it and its neighboring samples both possess high uncertainty levels. Hence, we are selecting samples from uncertain regions. By actively obtaining feedback from LLMs for such samples in $\\mathcal{Q}$ , we can significantly improve the model performance in GCD.",
"bbox": [507, 480, 882, 593],
"page_idx": 3
},
{
"type": "text",
"text": "3.4 Comparison-based Prompting (CP)",
"text_level": 1,
"bbox": [507, 604, 835, 620],
"page_idx": 3
},
{
"type": "text",
"text": "Upon identifying the most informative unlabeled samples through the UP strategy, we need to query LLMs to obtain pseudo category labels for these samples. However, since the category labels of newly emerged categories remain unknown, it is infeasible to request LLMs to directly generate a possibly brand new label for each selected sample. To overcome this, we design a comparison-based prompting method from the clustering perspective, which prompts LLMs to classify a sample by comparing it with other samples representing distinct categories.",
"bbox": [505, 624, 882, 816],
"page_idx": 3
},
{
"type": "text",
"text": "This CP method requires the selection of a representative sample for each category cluster. To this end, we first compute the distances of various samples within the cluster to its center $\\pmb{\\mu}_i$ , and then",
"bbox": [507, 818, 880, 882],
"page_idx": 3
},
{
"type": "page_number",
"text": "4",
"bbox": [492, 903, 505, 915],
"page_idx": 3
},
{
"type": "footer",
"text": "7848",
"bbox": [480, 927, 519, 940],
"page_idx": 3
},
{
"type": "text",
"text": "select the sample closest to $\\pmb{\\mu}_i$ to represent this cluster. We denote this close-to-center sample as $\\mu_{i}$ . With these close-to-center samples $S = \\{\\mu_i\\}_{i = 1}^K$ , we construct the prompt to query LLMs as:",
"bbox": [114, 85, 489, 151],
"page_idx": 4
},
{
"type": "text",
"text": "Cluster $[c_1]$ : Sample $[\\mu_1]$ ; Cluster $[c_2]$ : Sample $[\\mu_2]$ ; $\\ldots$ ; Cluster $[c_p]$ : Sample $[\\mu_p]$ . Above is a list of samples representing distinct categories. Please identify one sample that shares the same or similar underlying category as the input sample from the provided list.",
"bbox": [114, 166, 489, 247],
"page_idx": 4
},
{
"type": "text",
"text": "Here, $p$ is the number of representative samples used for the comparison. In our experiments, for each $x_{i}^{q}$ in $\\mathcal{Q}$ , we empirically incorporate $p = |\\mathcal{Q}| / 2$ representative samples that are closest to $x_{i}^{q}$ into the prompt. With this design, we can effectively utilize LLMs to classify the selected samples into their corresponding categories, denoted as $\\mathcal{Q} = \\{x_{i}^{q}, y_{i}^{LLM}\\}_{i=1}^{K}$ , thus bypassing the requirement for explicit labels of unknown categories.",
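As a concrete illustration of this prompt construction, the sketch below assembles the comparison-based query from a selected sample and its p nearest close-to-center representatives. The listing wording follows the quoted prompt; the trailing "Input sample" line and the function name are illustrative assumptions, not necessarily the authors' exact template.

```python
def build_cp_prompt(query_text, representatives):
    """Comparison-based prompt (Section 3.4).

    representatives: list of (cluster_id, sample_text) pairs -- the p
    close-to-center samples nearest to the selected query sample.
    """
    listing = " ".join(
        f"Cluster [{cid}]: Sample [{text}];" for cid, text in representatives
    )
    return (
        f"{listing} Above is a list of samples representing distinct "
        "categories. Please identify one sample that shares the same or "
        "similar underlying category as the input sample from the provided "
        f"list.\nInput sample: [{query_text}]"
    )
```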
{
"type": "text",
"text": "3.5 Soft Feedback Propagation (SFP)",
"text_level": 1,
"bbox": [115, 435, 423, 451],
"page_idx": 4
},
{
"type": "text",
"text": "By querying LLMs using the CP method, we can endow the selected unlabeled samples with their respective pseudo labels to augment the GCD models for discerning new categories. However, a performance gap persists between the partially and fully LLM-augmented GCD models. Given that the selection of the unlabeled samples is based on their model predictive uncertainty and neighboring uncertainty, and samples distributed close to each other are more likely to share the same category, we thus propose a soft feedback propagation mechanism to propagate the pseudo labels generated by LLMs across their similar neighbors, amplifying the utility of the feedback from LLMs without any additional cost. Specifically, for each $x_{i}^{q}$ in $\\mathcal{Q}$ , we refine the model prediction $q_{j}$ of its uncertain neighbor $x_{j} \\in \\mathcal{N}(x_{i}^{q})$ in Equation (1) to propagate the LLM-generated pseudo label $y_{i}^{LLM}$ :",
"bbox": [114, 456, 489, 746],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\boldsymbol{q}_j \\leftarrow \\left(1 - \\operatorname{sim}\\left(\\boldsymbol{z}_j, \\boldsymbol{z}_i^q\\right)\\right) \\cdot \\boldsymbol{q}_j + \\operatorname{sim}\\left(\\boldsymbol{z}_j, \\boldsymbol{z}_i^q\\right) \\cdot \\boldsymbol{y}^{LLM}, \\tag{7}\n$$\n",
"text_format": "latex",
"bbox": [124, 758, 487, 776],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\ny_j^{\\mathrm{prop}} = \\begin{cases} y_i^{LLM}, & \\text{if } \\operatorname{argmax}\\left(\\boldsymbol{q}_j\\right) = y_i^{LLM} \\\\ -1, & \\text{otherwise} \\end{cases} \\tag{8}\n$$\n",
"text_format": "latex",
"bbox": [124, 785, 487, 828],
"page_idx": 4
},
{
"type": "text",
"text": "where $\\operatorname{sim}(\\cdot, \\cdot)$ denotes the similarity function defined in Equation (4). $\\pmb{y}^{LLM}$ is a one-hot vector where the value at position $y_{i}^{LLM}$ is set to 1. To",
"bbox": [114, 834, 489, 882],
"page_idx": 4
},
{
"type": "text",
"text": "interpret Equation (8), we argue that when the uncertain neighbor $x_{j} \\in \\mathcal{N}(x_{i}^{q})$ is assigned to the same cluster as the LLM-labeled sample $x_{i}^{q}$ according to the refined $q_{j}$ , the pseudo label $y_{i}^{LLM}$ will be propagated to $x_{j}$ . Otherwise, $x_{j}$ will reject the pseudo label $y_{i}^{LLM}$ and remain an unlabeled sample.",
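A compact sketch of Equations (7)-(8) for a single uncertain neighbor follows; the reuse of the RBF similarity from Equation (4) mirrors the text, while the function signature, variable names, and the rho default are assumptions.

```python
import numpy as np

def soft_feedback(q_j, z_j, z_q, y_llm, rho=1.0):
    """Soft feedback propagation for one uncertain neighbor (Eqs. 7-8).

    q_j: (K,) model prediction for neighbor x_j (from Eq. 1); y_llm:
    integer cluster id returned by the LLM for the query sample x_q.
    Returns y_llm if the refined prediction agrees, else -1 (stay unlabeled).
    """
    s = np.exp(-rho * ((z_j - z_q) ** 2).sum())           # RBF similarity (Eq. 4)
    y_vec = np.eye(len(q_j))[y_llm]                       # one-hot LLM pseudo label
    q_j = (1.0 - s) * q_j + s * y_vec                     # refined prediction (Eq. 7)
    return y_llm if int(np.argmax(q_j)) == y_llm else -1  # acceptance test (Eq. 8)
```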
{
"type": "text",
"text": "3.6 Model Optimization",
"text_level": 1,
"bbox": [507, 210, 714, 225],
"page_idx": 4
},
{
"type": "text",
"text": "After obtaining pseudo labels for the selected unlabeled samples in $\\mathcal{Q}$ from LLMs and propagating these labels via SFP, we update the model using a supervised contrastive learning loss (Gao et al., 2021; Guo et al., 2022) as follows:",
"bbox": [507, 231, 880, 312],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L} = \\sum_{i=1}^{L'} -\\frac{1}{|\\mathcal{N}'(x_i)|} \\sum_{x_j \\in \\mathcal{N}'(x_i)} \\log \\frac{e^{\\operatorname{sim}(\\boldsymbol{z}_i, \\boldsymbol{z}_j)/\\tau}}{\\sum_{k \\neq i} e^{\\operatorname{sim}(\\boldsymbol{z}_i, \\boldsymbol{z}_k)/\\tau}}, \\tag{9}\n$$\n",
"text_format": "latex",
"bbox": [519, 323, 880, 353],
"page_idx": 4
},
{
"type": "text",
"text": "where $L^{\\prime}$ denotes the total number of labeled samples, including both the original labeled samples and the newly labeled samples obtained via the CP and SFP. $\\mathcal{N}'(x_i)$ is the set of samples sharing the same category label with $x_{i}$ . $\\tau$ is the temperature.",
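As a reference point, Equation (9) can be sketched directly in NumPy. This assumes L2-normalized representations and a placeholder temperature of 0.07 (a common default, not necessarily the paper's setting), and is not the authors' training code.

```python
import numpy as np

def supervised_contrastive_loss(Z, y, tau=0.07):
    """Supervised contrastive objective over labeled samples (Eq. 9).

    Z: (L', d) L2-normalized representations; y: (L',) category labels
    (original labels plus those produced by CP and SFP).
    """
    sim = Z @ Z.T / tau                    # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)         # exclude k == i from the denominator
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    loss = 0.0
    for i in range(len(y)):
        pos = np.where((y == y[i]) & (np.arange(len(y)) != i))[0]
        if len(pos):                       # positives N'(x_i): same-label samples
            loss += -log_prob[i, pos].mean()
    return loss
```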
{
"type": "text",
"text": "4 Experiments",
"text_level": 1,
"bbox": [507, 460, 655, 476],
"page_idx": 4
},
{
"type": "text",
"text": "4.1 Datasets",
"text_level": 1,
"bbox": [507, 487, 621, 500],
"page_idx": 4
},
{
"type": "text",
"text": "We conduct experiments on three GCD datasets: BANKING (Casanueva et al., 2020), CLINC (Larson et al., 2019), and StackOverflow (Xu et al., 2015). The detailed statistics are reported in Appendix A.1. In our experiments, we keep the same train, development, and test splits as previous work (Liang and Liao, 2023). More experimental details are provided in the Appendix A.2.",
"bbox": [507, 508, 880, 637],
"page_idx": 4
},
{
"type": "text",
"text": "4.2 Evaluation Metrics",
"text_level": 1,
"bbox": [507, 650, 705, 664],
"page_idx": 4
},
{
"type": "text",
"text": "Following (Zhang et al., 2022a; Liang and Liao, 2023), we adopt three metrics for evaluating the GCD performance: Accuracy (ACC) based on the Hungarian algorithm, Adjusted Rand Index (ARI), and Normalized Mutual Information (NMI). The specific definitions are presented in Appendix A.3. It is worth noting that ACC is regarded as the primary metric for evaluation, with higher values indicating better GCD performance.",
"bbox": [507, 671, 880, 816],
"page_idx": 4
},
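For the Hungarian-algorithm-based ACC, a common SciPy sketch is shown below: it finds the best one-to-one mapping between predicted clusters and ground-truth categories and reports the matched fraction. This is the standard clustering-accuracy computation, not necessarily the paper's exact evaluation script.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC via the Hungarian algorithm over a cluster-vs-label count matrix."""
    K = int(max(y_true.max(), y_pred.max())) + 1
    count = np.zeros((K, K), dtype=int)
    for t, p in zip(y_true, y_pred):
        count[p, t] += 1                                  # predicted cluster p, true label t
    row, col = linear_sum_assignment(count.max() - count) # maximize matched samples
    return count[row, col].sum() / len(y_true)
```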
{
"type": "text",
"text": "4.3 Baselines",
"text_level": 1,
"bbox": [507, 828, 628, 841],
"page_idx": 4
},
{
"type": "text",
"text": "We compare with the following SOTA GCD methods: DTC (Han et al., 2019), CDAC+ (Lin et al.,",
"bbox": [507, 848, 880, 881],
"page_idx": 4
},
{
"type": "page_number",
"text": "5",
"bbox": [492, 903, 504, 914],
"page_idx": 4
},
{
"type": "footer",
"text": "7849",
"bbox": [480, 927, 519, 940],
"page_idx": 4
},
{
"type": "text",
"text": "2020), DeepAligned (Zhang et al., 2021b), ProbNID (Zhou et al., 2023), DCSC (Wei et al., 2022), MTP-CLNN (Zhang et al., 2022a), USNID (Zhang et al., 2023a), and the best-performing method CsePL (Liang and Liao, 2023). We leave the details of these baselines in Appendix A.4.",
"bbox": [114, 85, 489, 183],
"page_idx": 5
},
{
"type": "text",
"text": "4.4 Main Results",
"text_level": 1,
"bbox": [115, 193, 268, 208],
"page_idx": 5
},
{
"type": "text",
"text": "4.4.1 GCD Performance Comparison",
"text_level": 1,
"bbox": [115, 215, 425, 231],
"page_idx": 5
},
{
"type": "text",
"text": "Table 1 presents the main GCD results of our proposed ALUP against existing baselines, where the peak performance is highlighted in bold. Generally speaking, our ALUP consistently outperforms all existing baselines across three datasets by large margins. We analyze the results as follows:",
"bbox": [114, 234, 489, 331],
"page_idx": 5
},
{
"type": "text",
"text": "Comparison of different methods in GCD: Table 1 reveals that ALUP significantly outperforms the existing leading baselines, such as CsePL and USNID. For example, the proposed ALUP surpasses previous SOTA CsePL by margins of $2.51\\%$ in ACC, $2.12\\%$ in ARI, and $1.14\\%$ in NMI on BANKING- $50\\%$ . Notably, the performance gains are more pronounced when a larger number of categories remain unknown. For example, ALUP's ACC improves by $3.55\\%$ on BANKING- $25\\%$ . This proves that the ALUP can acquire effective supervision signals from LLMs, enhancing the model performance in discovering new categories.",
"bbox": [114, 340, 489, 551],
"page_idx": 5
},
{
"type": "text",
"text": "Comparison of different datasets: We evaluate the performance of the ALUP framework on different datasets, including the single-domain, fine-grained BANKING dataset, and the multi-domain CLINC dataset. From Table 1, we can notice that all existing methods exhibit significantly lower performance on BANKING compared to CLINC, indicating that the single-domain fine-grained scenario is more challenging for GCD. However, ALUP achieves a more significant improvement of $1\\% \\sim 3\\%$ on BANKING- $50\\%$ compared with the CsePL, while only $0.8\\% \\sim 2\\%$ on CLINC- $50\\%$ . This observation further strengthens the benefits of our ALUP in providing effective supervision signals to cope with the challenges in fine-grained category discovery.",
"bbox": [114, 558, 489, 818],
"page_idx": 5
},
{
"type": "text",
"text": "4.5 In-depth Analyses",
"text_level": 1,
"bbox": [115, 828, 307, 845],
"page_idx": 5
},
{
"type": "text",
"text": "In this subsection, we conduct further detailed analyses to explore the impact of each key component",
"bbox": [114, 850, 489, 882],
"page_idx": 5
},
{
"type": "text",
"text": "within the proposed ALUP framework.",
"bbox": [507, 85, 801, 101],
"page_idx": 5
},
{
"type": "text",
"text": "4.5.1 Effect of Uncertainty Propagation",
"text_level": 1,
"bbox": [507, 124, 836, 141],
"page_idx": 5
},
{
"type": "text",
"text": "Table 2 presents the experimental results of removing the UP strategy in Equation (5) from ALUP on the BANKING dataset. We observe a significant reduction in GCD performance across various known category ratios upon removal. In particular, the ACC of ALUP decreases by $1.20\\%$ while the ARI and NMI drop by $1.34\\%$ and $0.64\\%$ on BANKING- $25\\%$ , respectively. This indicates that the UP strategy can accurately identify the most informative samples for querying LLMs to boost the GCD model performance. It notably avoids selecting outliers that have high model uncertainty but are less beneficial for model learning.",
"bbox": [507, 152, 882, 361],
"page_idx": 5
},
{
"type": "text",
"text": "4.5.2 Effect of Soft Feedback Propagation",
"text_level": 1,
"bbox": [507, 384, 853, 400],
"page_idx": 5
},
{
"type": "text",
"text": "We also explore the contribution of the SFP mechanism by comparing the model performance when omitting the feedback propagation from LLMs in Equation (8) with the standard ALUP. Table 2 illustrates a notable decline in model performance in the absence of SFP, with a decrease of $1.88\\%$ in ACC, $1.67\\%$ in ARI, and $0.38\\%$ in NMI. Nevertheless, ALUP w/o SFP still slightly outperforms the best-performing baseline CsePL. We suggest that this observation can be explained by two main points: (1) The acquisition of supervision signals from LLMs for the informative samples is beneficial for enhancing the model's capacity to discover new categories. (2) The SFP strategy can effectively propagate the accurate supervision signals from LLMs, amplifying the utility of LLM's feedback while concurrently minimizing the risk of propagating errors.",
"bbox": [507, 411, 882, 701],
"page_idx": 5
},
{
"type": "text",
"text": "In contrast to the SFP strategy, we also investigate the Hard Propagation strategy within the proposed ALUP (ALUP w/ HP), where LLMs' feedback is directly extended to the neighboring samples without any control. As presented in Table 2, we can observe that the model performance significantly decreases using the hard propagation, descending even below the levels achieved by CsePL. This is probably due to the propagation of inaccurate supervision signals from LLMs, which introduces considerable noise into the model learning.",
"bbox": [507, 705, 882, 883],
"page_idx": 5
},
{
"type": "page_number",
"text": "6",
"bbox": [492, 903, 505, 915],
"page_idx": 5
},
{
"type": "footer",
"text": "7850",
"bbox": [480, 927, 519, 940],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/a761cdf8e636af4f1d13859c1b362d4400bf8e6dc06bdddfd0513a38f51bfd77.jpg",
"table_caption": [
"Table 1: Main performance results on the generalized category discovery across three public datasets. KCR denotes the known category rate."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">KCR</td><td rowspan=\"2\">Methods</td><td colspan=\"3\">BANKING</td><td colspan=\"3\">CLINC</td><td colspan=\"3\">StackOverflow</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td><td>ACC</td><td>ARI</td><td>NMI</td><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td rowspan=\"9\">25%</td><td>DTC</td><td>31.75</td><td>19.09</td><td>55.59</td><td>56.90</td><td>41.92</td><td>79.35</td><td>29.54</td><td>17.51</td><td>29.96</td></tr><tr><td>CDAC+</td><td>48.00</td><td>33.74</td><td>66.39</td><td>66.24</td><td>50.02</td><td>84.68</td><td>51.61</td><td>30.99</td><td>46.16</td></tr><tr><td>DeepAligned</td><td>49.08</td><td>37.62</td><td>70.50</td><td>74.07</td><td>64.63</td><td>88.97</td><td>54.50</td><td>37.96</td><td>50.86</td></tr><tr><td>ProbNID</td><td>55.75</td><td>44.25</td><td>74.37</td><td>71.56</td><td>63.25</td><td>89.21</td><td>54.10</td><td>38.10</td><td>53.70</td></tr><tr><td>DCSC</td><td>60.15</td><td>49.75</td><td>78.18</td><td>79.89</td><td>72.68</td><td>91.70</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MTP-CLNN</td><td>65.06</td><td>52.91</td><td>80.04</td><td>83.26</td><td>76.20</td><td>93.17</td><td>74.70</td><td>54.80</td><td>73.35</td></tr><tr><td>USNID</td><td>65.85</td><td>56.53</td><td>81.94</td><td>83.12</td><td>77.95</td><td>94.17</td><td>75.76</td><td>65.45</td><td>74.91</td></tr><tr><td>CsePL</td><td>71.06</td><td>60.36</td><td>83.32</td><td>86.16</td><td>79.65</td><td>94.07</td><td>79.47</td><td>64.92</td><td>74.88</td></tr><tr><td>ALUP</td><td>74.61</td><td>62.64</td><td>84.06</td><td>88.40</td><td>82.44</td><td>94.84</td><td>82.20</td><td>64.54</td><td>76.58</td></tr><tr><td rowspan=\"9\">50%</td><td>DTC</td><td>49.85</td><td>37.05</td><td>69.46</td><td>64.39</td><td>50.44</td><td>83.01</td><td>52.92</td><td>37.38</td><td>49.80</td></tr><tr><td>CDAC+</td><td>48.55</td><td>34.97</td><td>67.30</td><td>68.01</td><td>54.87</td><td>86.00</td><td>51.79</td><td>30.88</td><td>46.21</td></tr><tr><td>DeepAligned</td><td>59.38</td><td>47.95</td><td>76.67</td><td>80.70</td><td>72.56</td><td>91.59</td><td>74.52</td><td>57.62</td><td>68.28</td></tr><tr><td>ProbNID</td><td>63.02</td><td>50.42</td><td>77.95</td><td>82.62</td><td>75.27</td><td>92.72</td><td>73.20</td><td>62.46</td><td>74.54</td></tr><tr><td>DCSC</td><td>68.30</td><td>56.94</td><td>81.19</td><td>84.57</td><td>78.82</td><td>93.75</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MTP-CLNN</td><td>70.97</td><td>60.17</td><td>83.42</td><td>86.18</td><td>80.17</td><td>94.30</td><td>80.36</td><td>62.24</td><td>76.66</td></tr><tr><td>USNID</td><td>73.27</td><td>63.77</td><td>85.05</td><td>87.22</td><td>82.87</td><td>95.45</td><td>82.06</td><td>71.63</td><td>78.77</td></tr><tr><td>CsePL</td><td>76.94</td><td>66.66</td><td>85.65</td><td>88.66</td><td>83.14</td><td>95.09</td><td>85.68</td><td>71.99</td><td>80.28</td></tr><tr><td>ALUP</td><td>79.45</td><td>68.78</td><td>86.79</td><td>90.53</td><td>84.84</td><td>95.97</td><td>86.70</td><td>73.85</td><td>81.45</td></tr></table>",
"bbox": [121, 83, 877, 395],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/5b9d9210504e7d8e334fe254a9c5777bf932591fdeb48398abe5b349cd8a5541.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">KCR</td><td rowspan=\"2\">Methods</td><td colspan=\"3\">BANKING</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td rowspan=\"4\">25%</td><td>ALUP</td><td>74.61</td><td>62.64</td><td>84.06</td></tr><tr><td>- w/o UP</td><td>73.41</td><td>61.30</td><td>83.42</td></tr><tr><td>- w/o SFP</td><td>72.73</td><td>60.97</td><td>83.68</td></tr><tr><td>- w/ HP</td><td>70.24</td><td>59.08</td><td>82.32</td></tr><tr><td rowspan=\"4\">50%</td><td>ALUP</td><td>79.45</td><td>68.78</td><td>86.79</td></tr><tr><td>- w/o UP</td><td>78.64</td><td>67.16</td><td>86.05</td></tr><tr><td>- w/o SFP</td><td>77.66</td><td>67.04</td><td>86.43</td></tr><tr><td>- w/ HP</td><td>75.60</td><td>64.33</td><td>84.72</td></tr></table>",
"bbox": [141, 467, 460, 626],
"page_idx": 6
},
{
"type": "text",
"text": "Table 2: Ablation results on the BANKING dataset.",
"bbox": [124, 636, 475, 650],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/61f8e12ad87a9830e5d16a8431f6494fb5b45c85824cb1fab062c4ff5e9ea806.jpg",
"image_caption": [
"Figure 3: Effect of the number of propagated neighbors."
],
"image_footnote": [],
"bbox": [137, 684, 465, 837],
"page_idx": 6
},
{
"type": "text",
"text": "4.5.3 Number of Propagated Neighbors",
"text_level": 1,
"bbox": [507, 458, 835, 474],
"page_idx": 6
},
{
"type": "text",
"text": "To delve deeper into the effectiveness of the UP strategy within the ALUP framework, we conduct further experiments on the BANKING dataset to explore the effect of varying the number of propagated neighbors in unlabeled sample selection on the model performance. Figure 3 illustrates performance trends across different counts of propagated neighbors. Notably, as the number of propagated neighbors in Equation (3) increases, ALUP's performance improves, reaching an optimum with 25 propagated neighbors. Beyond this point, the model performance begins to decline. We hypothesize that this decrease might be attributed to the inclusion of samples with lower uncertainty, which potentially introduces significant noise into the process of unlabeled sample selection.",
"bbox": [507, 479, 882, 736],
"page_idx": 6
},
{
"type": "text",
"text": "4.5.4 Effect of Representative Samples",
"text_level": 1,
"bbox": [507, 749, 826, 765],
"page_idx": 6
},
{
"type": "text",
"text": "To assess the effectiveness of the CP method, we examine the impact of varying the number $p$ of representative samples integrated into the prompt for querying LLMs on the model performance. Experiments are conducted with $p$ values set at {19, 38, 57, 77}, where 19 denotes about a quarter of the total cluster count. As detailed in Table 3, the optimal",
"bbox": [507, 769, 880, 882],
"page_idx": 6
},
{
"type": "page_number",
"text": "7",
"bbox": [492, 903, 504, 915],
"page_idx": 6
},
{
"type": "footer",
"text": "7851",
"bbox": [480, 927, 517, 940],
"page_idx": 6
},
{
"type": "text",
"text": "GCD performance is achieved by integrating 38 representative samples into the prompt to acquire supervision signals from LLMs for the unlabeled samples. We suggest that the main reasons for this observation come from two aspects: (1) A smaller $p$ may potentially omit the representative samples sharing the same underlying category as the selected samples, possibly limiting LLMs' ability to offer the requisite supervision signals during comparisons with the integrated representative samples. (2) Conversely, incorporating a larger number of representative samples for the CP method results in an extended prompt length. This could lead LLMs to misclassify the chosen unlabeled samples into inaccurate categories, thereby negatively affecting the model's performance.",
"bbox": [114, 85, 489, 343],
"page_idx": 7
},
{
"type": "text",
"text": "In our standard approach to the CP method, we select the single closest-to-center sample within each cluster as the representative for constructing prompts to query LLMs. Expanding our investigation into the CP method, we experiment with an alternative strategy involving a close-to-center set (specifically, the top 3 samples nearest to the cluster center) to represent distinct clusters for prompting LLMs to determine pseudo category labels. As illustrated in Table 4, the experimental results on BANKING-25% demonstrate marginal gains with this strategy, achieving an increase of no more than 0.5% across all three metrics. Nonetheless, it necessitates an increased querying cost with LLMs. Balancing the slight improvement in performance against the rise in costs, we thus opt for the more straightforward and cost-effective strategy of utilizing single closest-to-center samples within the CP method.",
"bbox": [114, 344, 489, 650],
"page_idx": 7
},
{
"type": "text",
"text": "4.6 Impact of Different Base GCD Models",
"text_level": 1,
"bbox": [115, 665, 463, 682],
"page_idx": 7
},
{
"type": "text",
"text": "In our experiments, we select the most informative unlabeled samples based on the existing GCD models. To validate the effectiveness of the proposed ALUP, we also examine how its performance varies when different GCD models are integrated within ALUP on the BANKING- $50\\%$ dataset. As depicted in Figure 4, we can observe consistent and significant improvements with the proposed ALUP. This demonstrates that the proposed ALUP framework is effective in acquiring supervision signals from LLMs to enhance the model performance of discovering new categories and is adaptable to",
"bbox": [114, 689, 489, 883],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/f313d76129f509c36fd52b4e5e28dc840ca3b2b6adcd7612f29464c5b2fe7dfd.jpg",
"table_caption": [
"Table 3: Effect of the number of representative samples within the CP method."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">KCR</td><td rowspan=\"2\">p</td><td colspan=\"3\">BANKING</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td rowspan=\"4\">25%</td><td>19</td><td>73.70</td><td>61.40</td><td>83.58</td></tr><tr><td>38</td><td>74.61</td><td>62.64</td><td>84.06</td></tr><tr><td>57</td><td>72.01</td><td>60.86</td><td>83.04</td></tr><tr><td>77</td><td>71.36</td><td>59.58</td><td>82.64</td></tr><tr><td rowspan=\"4\">50%</td><td>19</td><td>78.44</td><td>67.46</td><td>86.25</td></tr><tr><td>38</td><td>79.45</td><td>68.78</td><td>86.79</td></tr><tr><td>57</td><td>77.56</td><td>66.04</td><td>85.93</td></tr><tr><td>77</td><td>76.66</td><td>65.23</td><td>85.59</td></tr></table>",
"bbox": [542, 83, 850, 261],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/89757a439666ef58440ad25da54f45f09756931ca22b4ab6fa0374d197cea546.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"3\">BANKING</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td>ALUP-standard</td><td>74.61</td><td>62.64</td><td>84.06</td></tr><tr><td>ALUP-close-to-center set</td><td>74.87</td><td>63.07</td><td>84.39</td></tr></table>",
"bbox": [514, 316, 878, 390],
"page_idx": 7
},
{
"type": "text",
"text": "Table 4: Performance of representative sample selection strategies within the CP method on BANKING-25%.",
"bbox": [507, 399, 880, 429],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/fa1c7ae124fea584fe154b16df5a3157823c15db6f870447e284e080a96ed24e.jpg",
"image_caption": [
"Figure 4: Performances of various base GCD models in ALUP on the BANKING-50%."
],
"image_footnote": [],
"bbox": [524, 449, 867, 608],
"page_idx": 7
},
{
"type": "text",
"text": "other GCD models.",
"bbox": [507, 678, 658, 693],
"page_idx": 7
},
{
"type": "text",
"text": "4.7 Influence of Query Sample Number",
"text_level": 1,
"bbox": [507, 712, 836, 728],
"page_idx": 7
},
{
"type": "text",
"text": "We study the effect of varying the number of selected unlabeled samples for querying LLMs in Figure 5. Model performance increases as more samples are selected for querying LLMs. Yet, this growth rate progressively diminishes as the LLMs' feedback is propagated, and selecting informative samples becomes more challenging with the increasing number of selected samples.",
"bbox": [505, 737, 882, 883],
"page_idx": 7
},
{
"type": "page_number",
"text": "8",
"bbox": [492, 903, 505, 915],
"page_idx": 7
},
{
"type": "footer",
"text": "7852",
"bbox": [480, 927, 521, 941],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/eaf9e767253098e6262e1859535289d5b3b5c5fcf15e8f1e33af7473795afa5c.jpg",
"image_caption": [
"Figure 5: Effect of the number of query samples on BANKING- $50\\%$ ."
],
"image_footnote": [],
"bbox": [132, 86, 470, 242],
"page_idx": 8
},
{
"type": "table",
"img_path": "images/e4ea5010bb36dd4337436ae81f623ef73424085569a7562a51c34a2fa0222e11.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"3\">BANKING</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td>ALUP-gpt-3.5-turbo</td><td>74.61</td><td>62.64</td><td>84.06</td></tr><tr><td>ALUP-FlanT5-XXL</td><td>73.38</td><td>62.29</td><td>83.76</td></tr><tr><td>ALUP-text-embedding</td><td>71.95</td><td>61.48</td><td>83.7</td></tr></table>",
"bbox": [121, 299, 482, 390],
"page_idx": 8
},
{
"type": "text",
"text": "Table 5: Effect of different LLMs.",
"bbox": [183, 399, 416, 413],
"page_idx": 8
},
{
"type": "text",
"text": "4.8 Effect of Different LLMs",
"text_level": 1,
"bbox": [115, 441, 357, 455],
"page_idx": 8
},
{
"type": "text",
"text": "We also examine the performance impact of utilizing different LLMs within our ALUP. Specifically, we conduct experiments on BANKING-25%, comparing the performance of the closed-source gpt-3.5-turbo against the open-source FlanT5-XXL in deriving supervision signals. As shown in Table 5, the experimental results illustrate a marginal performance decrease when employing FlanT5-XXL compared to gpt-3.5-turbo. Despite this, the use of FlanT5-XXL still markedly outperforms the best-performing baseline CsePL, highlighting the adaptability of our ALUP to various LLMs.",
"bbox": [114, 463, 487, 655],
"page_idx": 8
},
{
"type": "text",
"text": "Furthering our exploration into the mechanisms of LLMs' utilization in GCD, we evaluate the efficacy of our CP method against an alternative approach based on embedding similarity scores. For this comparison, we leverage the embedding model text-embedding-3-small from OpenAI to generate embeddings for both uncertain samples and cluster-representative samples, calculating their similarity scores to determine pseudo category labels. As reported in Table 5, the results demonstrate a drop in performance metrics using the embedding score method, underscoring the rationale of our CP method and its proficiency in capturing the nuanced semantic relationships essential for GCD.",
"bbox": [114, 657, 489, 881],
"page_idx": 8
},
{
"type": "text",
"text": "5 Conclusion",
"text_level": 1,
"bbox": [509, 84, 640, 99],
"page_idx": 8
},
{
"type": "text",
"text": "In summary, our ALUP framework innovatively integrates Large Language Models with uncertainty propagation in generalized category discovery, marking a significant leap in the field. By employing comparison-based LLM prompting and a novel soft feedback propagation mechanism, ALUP adeptly identifies and categorizes new data with enhanced accuracy and efficiency. This approach not only surpasses traditional GCD methods but also minimizes the risk of error propagation, a critical advancement in handling real-world, dynamic datasets with LLMs. Future endeavors will focus on refining LLM integration, extending our methods to multi-modal data, and enhancing scalability and data privacy measures, furthering ALUP's potential in diverse and evolving open-world computing.",
"bbox": [507, 110, 882, 384],
"page_idx": 8
},
{
"type": "text",
"text": "Acknowledgments",
"text_level": 1,
"bbox": [509, 395, 672, 412],
"page_idx": 8
},
{
"type": "text",
"text": "This research is supported by the Ministry of Education, Singapore, under its AcRF Tier 2 Funding (Proposal ID: T2EP20123-0052). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.",
"bbox": [507, 420, 882, 533],
"page_idx": 8
},
{
"type": "text",
"text": "Limitations",
"text_level": 1,
"bbox": [509, 545, 613, 560],
"page_idx": 8
},
{
"type": "text",
"text": "While our ALUP framework marks a significant advance in Generalized Category Discovery using LLMs, it does have some limitations. The reliance on LLMs can introduce biases and inaccuracies, particularly in areas where these models have limited training data or exposure. Although our propagation method effectively reduces overall costs, the initial computational demands of LLMs may still pose scalability challenges, especially for resource-limited environments. Additionally, the framework currently focuses on textual data, which could limit its applicability in multi-modal data scenarios. Moreover, while our soft feedback propagation mechanism aims to minimize error spread, it is not immune to the risk of amplifying initial inaccuracies from LLM feedback. Finally, data privacy and security remain critical concerns in the use of external LLMs, necessitating ongoing vigilance and adaptation.",
"bbox": [507, 570, 882, 876],
"page_idx": 8
},
{
"type": "page_number",
"text": "9",
"bbox": [490, 902, 505, 915],
"page_idx": 8
},
{
"type": "footer",
"text": "7853",
"bbox": [480, 927, 519, 940],
"page_idx": 8
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [117, 85, 215, 99],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Wenbin An, Feng Tian, Qinghua Zheng, Wei Ding, Qianying Wang, and Ping Chen. 2023. Generalized category discovery with decoupled prototypical network. In AAAI, pages 12527-12535.",
"Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. 2018. Deep clustering for unsupervised learning of visual features. In ECCV, pages 139-156.",
"Iñigo Casanueva, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. 2020. Efficient intent detection with dual sentence encoders. In NLP4ConvAI@ACL, pages 38-45.",
"Qinyuan Cheng, Xiaogui Yang, Tianxiang Sun, Linyang Li, and Xipeng Qiu. 2023. Improving contrastive learning of sentence embeddings from AI feedback. In Findings of ACL, pages 11122-11138.",
"Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2023. PaLM: Scaling language modeling with pathways. J. Mach. Learn. Res., pages 240:1-240:113.",
"Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, and Sanjiv Kumar. 2021. Batch active learning at scale. In NeurIPS, pages 11933-11944.",
"Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In NeurIPS, pages 2292-2300.",
"Maarten De Raedt, Frédéric Godin, Thomas Demeester, and Chris Develder. 2023. IDAS: Intent discovery with abstractive summarization. In NLP4ConvAI@ACL, pages 71-88.",
"Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In EMNLP, pages 6894-6910."
],
|
| 1596 |
+
"bbox": [
|
| 1597 |
+
117,
|
| 1598 |
+
107,
|
| 1599 |
+
487,
|
| 1600 |
+
881
|
| 1601 |
+
],
|
| 1602 |
+
"page_idx": 9
|
| 1603 |
+
},
|
| 1604 |
+
{
|
| 1605 |
+
"type": "list",
|
| 1606 |
+
"sub_type": "ref_text",
|
| 1607 |
+
"list_items": [
|
| 1608 |
+
"Daniel Gissin and Shai Shalev-Shwartz. 2019. Discriminative active learning. CoRR, abs/1907.06347.",
|
| 1609 |
+
"Shasha Guo, Jing Zhang, Yanling Wang, Qianyi Zhang, Cuiping Li, and Hong Chen. 2022. DSM: Question generation over knowledge base via modeling diverse subgraphs with meta-learner. In EMNLP, pages 4194-4207.",
|
| 1610 |
+
"Amir Hadifar, Lucas Sterckx, Thomas Demeester, and Chris Develder. 2019. A self-training approach for short text clustering. In RepL4NLP@ACL, pages 194-199.",
|
| 1611 |
+
"Kai Han, Andrea Vedaldi, and Andrew Zisserman. 2019. Learning to discover novel visual categories via deep transfer clustering. In ICCV, pages 8400-8408.",
|
| 1612 |
+
"Yen-Chang Hsu, Zhaoyang Lv, and Zsolt Kira. 2018. Learning to cluster in order to transfer across domains and tasks. In ICLR.",
|
| 1613 |
+
"Yen-Chang Hsu, Zhaoyang Lv, Joel Schlosser, Phillip Odom, and Zsolt Kira. 2019. Multi-class classification without multi-class labels. In ICLR.",
|
| 1614 |
+
"Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, and Christopher Manning. 2021. Mind your outliers! investigating the negative impact of outliers on active learning for visual question answering. In ACLIJCNLP, pages 7265-7281.",
|
| 1615 |
+
"Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In EMNLP-IJCNLP, pages 1311-1316.",
|
| 1616 |
+
"David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In SIGIR, pages 3-12.",
|
| 1617 |
+
"Jinggui Liang and Lizi Liao. 2023. ClusterPrompt: Cluster semantic enhanced prompt learning for new intent discovery. In Findings of EMNLP, pages 10468-10481.",
|
| 1618 |
+
"Lizi Liao, Grace Hui Yang, and Chirag Shah. 2023. Proactive conversational agents in the post-chatgpt world. In SIGIR, pages 3452-3455.",
|
| 1619 |
+
"Ting-En Lin, Hua Xu, and Hanlei Zhang. 2020. Discovering new intents via constrained deep adaptive clustering with cluster refinement. In AAAI, pages 8360-8367.",
|
| 1620 |
+
"Ming Liu, Wray L. Buntine, and Gholamreza Haffari. 2018. Learning how to actively learn: A deep imitation learning approach. In ACL, pages 1874-1883."
|
| 1621 |
+
],
|
| 1622 |
+
"bbox": [
|
| 1623 |
+
510,
|
| 1624 |
+
86,
|
| 1625 |
+
880,
|
| 1626 |
+
881
|
| 1627 |
+
],
|
| 1628 |
+
"page_idx": 9
|
| 1629 |
+
},
|
| 1630 |
+
{
|
| 1631 |
+
"type": "page_number",
|
| 1632 |
+
"text": "10",
|
| 1633 |
+
"bbox": [
|
| 1634 |
+
489,
|
| 1635 |
+
903,
|
| 1636 |
+
509,
|
| 1637 |
+
914
|
| 1638 |
+
],
|
| 1639 |
+
"page_idx": 9
|
| 1640 |
+
},
|
| 1641 |
+
{
|
| 1642 |
+
"type": "footer",
|
| 1643 |
+
"text": "7854",
|
| 1644 |
+
"bbox": [
|
| 1645 |
+
480,
|
| 1646 |
+
927,
|
| 1647 |
+
519,
|
| 1648 |
+
940
|
| 1649 |
+
],
|
| 1650 |
+
"page_idx": 9
|
| 1651 |
+
},
|
| 1652 |
+
{
|
| 1653 |
+
"type": "list",
|
| 1654 |
+
"sub_type": "ref_text",
|
| 1655 |
+
"list_items": [
|
| 1656 |
+
"Yixin Liu, Alexander R. Fabbri, Pengfei Liu, Dragomir Radev, and Arman Cohan. 2023. On learning to summarize with large language models as references. CoRR, abs/2305.14239.",
|
| 1657 |
+
"James MacQueen et al. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 281-297.",
|
| 1658 |
+
"Katerina Margatina, Timo Schick, Nikolaos Aletras, and Jane Dwivedi-Yu. 2023. Active learning principles for in-context learning with large language models. In Findings of EMNLP, pages 5011-5034.",
|
| 1659 |
+
"Yutao Mou, Keqing He, Pei Wang, Yanan Wu, Jingang Wang, Wei Wu, and Weiran Xu. 2022. Watch the neighbors: A unified k-nearest neighbor contrastive learning framework for OOD intent discovery. In EMNLP, pages 1517-1529.",
|
| 1660 |
+
"Yutao Mou, Xiaoshuai Song, Keqing He, Chen Zeng, Pei Wang, Jingang Wang, Yunsen Xian, and Weiran Xu. 2023. Decoupling pseudo label disambiguation and representation learning for generalized intent discovery. In ACL, pages 9661-9675.",
|
| 1661 |
+
"OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.",
|
| 1662 |
+
"Padmasundari and Srinivas Bangalore. 2018. Intent discovery through unsupervised semantic text clustering. In INTERSPEECH, pages 606-610.",
|
| 1663 |
+
"Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Brij B. Gupta, Xiaojiang Chen, and Xin Wang. 2022. A survey of deep active learning. ACM Comput. Surv., pages 180:1-180:40.",
|
| 1664 |
+
"Bernhard Schölkopf, Kah Kay Sung, Christopher J. C. Burges, Federico Girosi, Partha Niyogi, Tomaso A. Poggio, and Vladimir Vapnik. 1997. Comparing support vector machines with gaussian kernels to radial basis function classifiers. IEEE Trans. Signal Process., pages 2758-2765.",
|
| 1665 |
+
"Christopher Schröder, Andreas Niekler, and Martin Potthast. 2022. Revisiting uncertainty-based query strategies for active learning with transformers. In *Findings of ACL*, pages 2194–2203.",
|
| 1666 |
+
"Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In *ICLR*.",
|
| 1667 |
+
"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971."
|
| 1668 |
+
],
|
| 1669 |
+
"bbox": [
|
| 1670 |
+
117,
|
| 1671 |
+
86,
|
| 1672 |
+
487,
|
| 1673 |
+
879
|
| 1674 |
+
],
|
| 1675 |
+
"page_idx": 10
|
| 1676 |
+
},
|
| 1677 |
+
{
|
| 1678 |
+
"type": "list",
|
| 1679 |
+
"sub_type": "ref_text",
|
| 1680 |
+
"list_items": [
|
| 1681 |
+
"Sagar Vaze, Kai Han, Andrea Vedaldi, and Andrew Zisserman. 2022. Generalized category discovery. In CVPR, pages 7482-7491.",
|
| 1682 |
+
"Vijay Viswanathan, Kiril Gashteovski, Carolin Lawrence, Tongshuang Wu, and Graham Neubig. 2023. Large language models enable few-shot clustering.",
|
| 1683 |
+
"Dan Wang and Yi Shang. 2014. A new active labeling method for deep learning. In IJCNN, pages 112-119.",
|
| 1684 |
+
"Feng Wei, Zhenbo Chen, Zhenghong Hao, Fengxin Yang, Hua Wei, Bing Han, and Sheng Guo. 2022. Semi-supervised clustering with contrastive learning for discovering new intents. arXiv preprint arXiv:2201.07604.",
|
| 1685 |
+
"Yuxia Wu, Tianhao Dai, Zhedong Zheng, and Lizi Liao. 2024. Active discovering new slots for task-oriented conversation. TASLP, pages 1-11.",
|
| 1686 |
+
"Yuxia Wu, Lizi Liao, Xueming Qian, and Tat-Seng Chua. 2022. Semi-supervised new slot discovery with incremental clustering. In EMNLP Findings, pages 6207-6218.",
|
| 1687 |
+
"Ruixuan Xiao, Yiwen Dong, Junbo Zhao, Runze Wu, Minmin Lin, Gang Chen, and Haobo Wang. 2023. Freearl: Towards human-free active learning in the era of large language models. In EMNLP, pages 14520-14535.",
|
| 1688 |
+
"Junyuan Xie, Ross B. Girshick, and Ali Farhadi. 2016. Unsupervised deep embedding for clustering analysis. In ICML, pages 478-487.",
|
| 1689 |
+
"Jiaming Xu, Peng Wang, Guanhua Tian, Bo Xu, Jun Zhao, Fangyuan Wang, and Hongwei Hao. 2015. Short text clustering via convolutional neural networks. In VS@HLT-NAACL, pages 62-69.",
|
| 1690 |
+
"Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, and Mingyi Hong. 2017. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In ICML, pages 3861-3870.",
|
| 1691 |
+
"Jingkang Yang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. 2021. Generalized out-of-distribution detection: A survey. CoRR, abs/2110.11334.",
|
| 1692 |
+
"Yue Yu, Rongzhi Zhang, Ran Xu, Jieyu Zhang, Jiaming Shen, and Chao Zhang. 2023. Cold-start data selection for better few-shot language model fine-tuning: A prompt-based uncertainty propagation approach. In ACL, pages 2499-2521.",
|
| 1693 |
+
"Weihao Zeng, Keqing He, Zechen Wang, Dayuan Fu, Guanting Dong, Ruotong Geng, Pei Wang, Jingang Wang, Chaobo Sun, Wei Wu, and Weiran Xu. 2022. Semi-supervised knowledge-grounded pre-training for task-oriented dialog systems. In SereTOD, pages 39-47."
|
| 1694 |
+
],
|
| 1695 |
+
"bbox": [
|
| 1696 |
+
510,
|
| 1697 |
+
86,
|
| 1698 |
+
880,
|
| 1699 |
+
879
|
| 1700 |
+
],
|
| 1701 |
+
"page_idx": 10
|
| 1702 |
+
},
|
| 1703 |
+
{
|
| 1704 |
+
"type": "page_number",
|
| 1705 |
+
"text": "11",
|
| 1706 |
+
"bbox": [
|
| 1707 |
+
489,
|
| 1708 |
+
903,
|
| 1709 |
+
507,
|
| 1710 |
+
914
|
| 1711 |
+
],
|
| 1712 |
+
"page_idx": 10
|
| 1713 |
+
},
|
| 1714 |
+
{
|
| 1715 |
+
"type": "footer",
|
| 1716 |
+
"text": "7855",
|
| 1717 |
+
"bbox": [
|
| 1718 |
+
480,
|
| 1719 |
+
927,
|
| 1720 |
+
519,
|
| 1721 |
+
940
|
| 1722 |
+
],
|
| 1723 |
+
"page_idx": 10
|
| 1724 |
+
},
|
| 1725 |
+
{
|
| 1726 |
+
"type": "list",
|
| 1727 |
+
"sub_type": "ref_text",
|
| 1728 |
+
"list_items": [
|
| 1729 |
+
"Xueying Zhan, Qingzhong Wang, Kuan-Hao Huang, Haoyi Xiong, Dejing Dou, and Antoni B. Chan. 2022. A comparative survey of deep active learning. CoRR, abs/2203.13450.",
|
| 1730 |
+
"Hanlei Zhang, Xiaoteng Li, Hua Xu, Panpan Zhang, Kang Zhao, and Kai Gao. 2021a. TEXTOIR: An integrated and visualized platform for text open intent recognition. In ACL-IJCNLP, pages 167-174.",
|
| 1731 |
+
"Hanlei Zhang, Hua Xu, Ting-En Lin, and Rui Lyu. 2021b. Discovering new intents with deep aligned clustering. In AAAI, pages 14365–14373.",
|
| 1732 |
+
"Hanlei Zhang, Hua Xu, Xin Wang, Fei Long, and Kai Gao. 2023a. A clustering framework for unsupervised and semi-supervised new intent discovery. IEEE TKDE, page 1-14.",
|
| 1733 |
+
"Ruoyu Zhang, Yanzeng Li, Yongliang Ma, Ming Zhou, and Lei Zou. 2023b. Llmaaa: Making large language models as active annotators. In *Findings of EMNLP*, pages 13088-13103.",
|
| 1734 |
+
"Yuwei Zhang, Zihan Wang, and Jingbo Shang. 2023c. Clusterllm: Large language models as a guide for text clustering. In EMNLP, pages 13903-13920.",
|
| 1735 |
+
"Yuwei Zhang, Haode Zhang, Li-Ming Zhan, Xiao-Ming Wu, and Albert Lam. 2022a. New intent discovery with pre-training and contrastive learning. In ACL, pages 256-269.",
|
| 1736 |
+
"Zhisong Zhang, Emma Strubell, and Eduard H. Hovy. 2022b. A survey of active learning for natural language processing. In EMNLP, pages 6166-6190.",
|
| 1737 |
+
"Zhun Zhong, Enrico Fini, Subhankar Roy, Zhiming Luo, Elisa Ricci, and Nicu Sebe. 2021. Neighborhood contrastive learning for novel class discovery. In CVPR, pages 10867-10875.",
|
| 1738 |
+
"Yunhua Zhou, Guofeng Quan, and Xipeng Qiu. 2023. A probabilistic framework for discovering new intents. In ACL, pages 3771-3784."
|
| 1739 |
+
],
|
| 1740 |
+
"bbox": [
|
| 1741 |
+
117,
|
| 1742 |
+
86,
|
| 1743 |
+
489,
|
| 1744 |
+
652
|
| 1745 |
+
],
|
| 1746 |
+
"page_idx": 11
|
| 1747 |
+
},
|
| 1748 |
+
{
|
| 1749 |
+
"type": "page_number",
|
| 1750 |
+
"text": "12",
|
| 1751 |
+
"bbox": [
|
| 1752 |
+
489,
|
| 1753 |
+
903,
|
| 1754 |
+
509,
|
| 1755 |
+
914
|
| 1756 |
+
],
|
| 1757 |
+
"page_idx": 11
|
| 1758 |
+
},
|
| 1759 |
+
{
|
| 1760 |
+
"type": "footer",
|
| 1761 |
+
"text": "7856",
|
| 1762 |
+
"bbox": [
|
| 1763 |
+
480,
|
| 1764 |
+
927,
|
| 1765 |
+
519,
|
| 1766 |
+
939
|
| 1767 |
+
],
|
| 1768 |
+
"page_idx": 11
|
| 1769 |
+
},
|
| 1770 |
+
{
|
| 1771 |
+
"type": "text",
|
| 1772 |
+
"text": "A Appendix",
|
| 1773 |
+
"text_level": 1,
|
| 1774 |
+
"bbox": [
|
| 1775 |
+
115,
|
| 1776 |
+
85,
|
| 1777 |
+
240,
|
| 1778 |
+
101
|
| 1779 |
+
],
|
| 1780 |
+
"page_idx": 12
|
| 1781 |
+
},
|
| 1782 |
+
{
|
| 1783 |
+
"type": "text",
|
| 1784 |
+
"text": "A.1 Dataset Statistics",
|
| 1785 |
+
"text_level": 1,
|
| 1786 |
+
"bbox": [
|
| 1787 |
+
115,
|
| 1788 |
+
111,
|
| 1789 |
+
302,
|
| 1790 |
+
124
|
| 1791 |
+
],
|
| 1792 |
+
"page_idx": 12
|
| 1793 |
+
},
|
| 1794 |
+
{
|
| 1795 |
+
"type": "text",
|
| 1796 |
+
"text": "We show the detailed statistics of BANKING, CLINC and StackOverflow datasets in Table 6. Specifically, BANKING is a fine-grained category discovery dataset collected from user dialogues in the banking domain. It contains over 13K user utterances that span over 77 distinct categories. CLINC is a multi-domain dataset, which encompasses 150 distinct categories and 22,500 utterances across 10 domains. StackOverflow is a technical question dataset collected from Kaggle.com, which includes 20K questions with 20 categories.",
|
| 1797 |
+
"bbox": [
|
| 1798 |
+
114,
|
| 1799 |
+
131,
|
| 1800 |
+
489,
|
| 1801 |
+
309
|
| 1802 |
+
],
|
| 1803 |
+
"page_idx": 12
|
| 1804 |
+
},
|
| 1805 |
+
{
|
| 1806 |
+
"type": "text",
|
| 1807 |
+
"text": "A.2 Implementation Details",
|
| 1808 |
+
"text_level": 1,
|
| 1809 |
+
"bbox": [
|
| 1810 |
+
115,
|
| 1811 |
+
321,
|
| 1812 |
+
352,
|
| 1813 |
+
336
|
| 1814 |
+
],
|
| 1815 |
+
"page_idx": 12
|
| 1816 |
+
},
|
| 1817 |
+
{
|
| 1818 |
+
"type": "text",
|
| 1819 |
+
"text": "For the dataset setup, following Zhang et al. (2023a), we randomly select a specified ratio $\\{25\\%, 50\\}$ of categories, denoted as known category rate (KCR), to serve as known categories. For each known category, $10\\%$ of labeled samples are selected to constitute a labeled dataset $\\mathcal{D}_l$ , while the remaining samples are deemed as unlabeled data, forming the unlabeled dataset $\\mathcal{D}_u$ .",
|
| 1820 |
+
"bbox": [
|
| 1821 |
+
114,
|
| 1822 |
+
342,
|
| 1823 |
+
489,
|
| 1824 |
+
469
|
| 1825 |
+
],
|
| 1826 |
+
"page_idx": 12
|
| 1827 |
+
},
|
| 1828 |
+
{
|
| 1829 |
+
"type": "text",
|
| 1830 |
+
"text": "For the Uncertainty Propagation, we set the freedom $\\alpha$ in Equation (1) to 1.0. The number of propagated neighbors is specifically set to 25 for all datasets. The $\\rho$ for calculating similarities in Equation (4) is set to 1.0.",
|
| 1831 |
+
"bbox": [
|
| 1832 |
+
114,
|
| 1833 |
+
470,
|
| 1834 |
+
489,
|
| 1835 |
+
549
|
| 1836 |
+
],
|
| 1837 |
+
"page_idx": 12
|
| 1838 |
+
},
|
| 1839 |
+
{
|
| 1840 |
+
"type": "text",
|
| 1841 |
+
"text": "For the Comparison-based Prompting, we employ the gpt-3.5-turbo as the basic LLM in our experiments. While acquiring supervision signals, the temperature is set to 0 for deterministic outputs, and the maximum tokens are constrained to 256. The default values are retained for the rest of the parameters. The number of representative samples is specifically set to 38 for the BANKING dataset, 75 for the CLINC dataset, and 20 for the StackOverflow dataset.",
|
| 1842 |
+
"bbox": [
|
| 1843 |
+
114,
|
| 1844 |
+
552,
|
| 1845 |
+
489,
|
| 1846 |
+
712
|
| 1847 |
+
],
|
| 1848 |
+
"page_idx": 12
|
| 1849 |
+
},
|
| 1850 |
+
{
|
| 1851 |
+
"type": "text",
|
| 1852 |
+
"text": "A.3 Evaluation Metrics",
|
| 1853 |
+
"text_level": 1,
|
| 1854 |
+
"bbox": [
|
| 1855 |
+
115,
|
| 1856 |
+
724,
|
| 1857 |
+
317,
|
| 1858 |
+
738
|
| 1859 |
+
],
|
| 1860 |
+
"page_idx": 12
|
| 1861 |
+
},
|
| 1862 |
+
{
|
| 1863 |
+
"type": "text",
|
| 1864 |
+
"text": "In the experiments, we employ three standard evaluation metrics: ACC, ARI, and NMI to evaluate the GCD performance. Specifically, ACC measures the performance of GCD by comparing the predicted labels with the ground-truth labels. The definition of ACC is as follows:",
|
| 1865 |
+
"bbox": [
|
| 1866 |
+
114,
|
| 1867 |
+
745,
|
| 1868 |
+
489,
|
| 1869 |
+
840
|
| 1870 |
+
],
|
| 1871 |
+
"page_idx": 12
|
| 1872 |
+
},
|
| 1873 |
+
{
|
| 1874 |
+
"type": "equation",
|
| 1875 |
+
"text": "\n$$\nA C C = \\frac {\\sum_ {i = 1} ^ {N} \\mathbb {1} _ {y _ {i} = m a p (\\hat {y} _ {i})}}{N}\n$$\n",
|
| 1876 |
+
"text_format": "latex",
|
| 1877 |
+
"bbox": [
|
| 1878 |
+
196,
|
| 1879 |
+
851,
|
| 1880 |
+
406,
|
| 1881 |
+
885
|
| 1882 |
+
],
|
| 1883 |
+
"page_idx": 12
|
| 1884 |
+
},
|
| 1885 |
+
{
|
| 1886 |
+
"type": "table",
|
| 1887 |
+
"img_path": "images/f4b9deb13103a503f6725ee5e3fcc1f37838fa317bdac440b93b9c21adfe2052.jpg",
|
| 1888 |
+
"table_caption": [],
|
| 1889 |
+
"table_footnote": [],
|
| 1890 |
+
"table_body": "<table><tr><td>Dataset</td><td>Domain</td><td>Categories</td><td>Utterances</td></tr><tr><td>BANKING</td><td>banking</td><td>77</td><td>13,083</td></tr><tr><td>CLINC</td><td>multi-domain</td><td>150</td><td>22,500</td></tr><tr><td>StackOverflow</td><td>question</td><td>20</td><td>20,000</td></tr></table>",
|
| 1891 |
+
"bbox": [
|
| 1892 |
+
510,
|
| 1893 |
+
82,
|
| 1894 |
+
877,
|
| 1895 |
+
149
|
| 1896 |
+
],
|
| 1897 |
+
"page_idx": 12
|
| 1898 |
+
},
|
| 1899 |
+
{
|
| 1900 |
+
"type": "text",
|
| 1901 |
+
"text": "Table 6: Statistics of datasets used in the experiments.",
|
| 1902 |
+
"bbox": [
|
| 1903 |
+
509,
|
| 1904 |
+
156,
|
| 1905 |
+
877,
|
| 1906 |
+
171
|
| 1907 |
+
],
|
| 1908 |
+
"page_idx": 12
|
| 1909 |
+
},
|
| 1910 |
+
{
|
| 1911 |
+
"type": "text",
|
| 1912 |
+
"text": "where $\\{\\hat{y}_i, y_i\\}$ denote the predicted label and the ground-truth label for a given sample $x_i$ respectively. map() is a mapping function that maps each predicted label $\\hat{y}_i$ to its corresponding ground-truth label $y_i$ by Hungarian algorithm.",
|
| 1913 |
+
"bbox": [
|
| 1914 |
+
507,
|
| 1915 |
+
181,
|
| 1916 |
+
880,
|
| 1917 |
+
261
|
| 1918 |
+
],
|
| 1919 |
+
"page_idx": 12
|
| 1920 |
+
},
|
| 1921 |
+
{
|
| 1922 |
+
"type": "text",
|
| 1923 |
+
"text": "ARI calculates the similarity between the predicted and ground-truth clusters, assessing the accuracy of clustering on a pairwise basis. ARI is defined as:",
|
| 1924 |
+
"bbox": [
|
| 1925 |
+
507,
|
| 1926 |
+
262,
|
| 1927 |
+
880,
|
| 1928 |
+
324
|
| 1929 |
+
],
|
| 1930 |
+
"page_idx": 12
|
| 1931 |
+
},
|
| 1932 |
+
{
|
| 1933 |
+
"type": "equation",
|
| 1934 |
+
"text": "\n$$\nA R I = \\frac {\\sum_ {i , j} \\binom {n _ {i , j}} {2} - [ \\sum_ {i} \\binom {u _ {i}} {2} \\sum_ {j} \\binom {v _ {j}} {2} ] / \\binom {N} {2}}{\\frac {1}{2} [ \\sum_ {i} \\binom {u _ {i}} {2} + \\sum_ {j} \\binom {v _ {j}} {2} ] - [ \\sum_ {i} \\binom {u _ {i}} {2} \\sum_ {j} \\binom {v _ {j}} {2} ] / \\binom {N} {2}}\n$$\n",
|
| 1935 |
+
"text_format": "latex",
|
| 1936 |
+
"bbox": [
|
| 1937 |
+
527,
|
| 1938 |
+
330,
|
| 1939 |
+
860,
|
| 1940 |
+
366
|
| 1941 |
+
],
|
| 1942 |
+
"page_idx": 12
|
| 1943 |
+
},
|
| 1944 |
+
{
|
| 1945 |
+
"type": "text",
|
| 1946 |
+
"text": "where $u_{i} = \\sum_{j} n_{i,j}$ , and $v_{j} = \\sum_{i} n_{i,j}$ . $N$ denotes the number of all samples. $n_{i,j}$ is the number of sample pairs that are both assigned to $i^{th}$ predicted cluster and $j^{th}$ ground-truth cluster.",
|
| 1947 |
+
"bbox": [
|
| 1948 |
+
507,
|
| 1949 |
+
370,
|
| 1950 |
+
880,
|
| 1951 |
+
435
|
| 1952 |
+
],
|
| 1953 |
+
"page_idx": 12
|
| 1954 |
+
},
|
| 1955 |
+
{
|
| 1956 |
+
"type": "text",
|
| 1957 |
+
"text": "NMI computes the normalized mutual information to quantify the agreement between the predicted and ground-truth clusters, providing a measure of clustering consistency. It can be calculated as follows:",
|
| 1958 |
+
"bbox": [
|
| 1959 |
+
507,
|
| 1960 |
+
436,
|
| 1961 |
+
880,
|
| 1962 |
+
514
|
| 1963 |
+
],
|
| 1964 |
+
"page_idx": 12
|
| 1965 |
+
},
|
| 1966 |
+
{
|
| 1967 |
+
"type": "equation",
|
| 1968 |
+
"text": "\n$$\nN M I (\\hat {\\boldsymbol {y}}, \\boldsymbol {y}) = \\frac {2 \\cdot I (\\hat {\\boldsymbol {y}} , \\boldsymbol {y})}{H (\\hat {\\boldsymbol {y}}) + H (\\boldsymbol {y})}\n$$\n",
|
| 1969 |
+
"text_format": "latex",
|
| 1970 |
+
"bbox": [
|
| 1971 |
+
578,
|
| 1972 |
+
521,
|
| 1973 |
+
810,
|
| 1974 |
+
555
|
| 1975 |
+
],
|
| 1976 |
+
"page_idx": 12
|
| 1977 |
+
},
|
| 1978 |
+
{
|
| 1979 |
+
"type": "text",
|
| 1980 |
+
"text": "where $\\{\\hat{y},\\pmb {y}\\}$ denote the predicted labels and the ground-truth labels respectively. $I(\\hat{y},\\pmb {y})$ is the mutual information between $\\hat{\\pmb{y}}$ and $\\pmb{y}$ . $H(\\cdot)$ represents the entropy function.",
|
| 1981 |
+
"bbox": [
|
| 1982 |
+
507,
|
| 1983 |
+
561,
|
| 1984 |
+
880,
|
| 1985 |
+
626
|
| 1986 |
+
],
|
| 1987 |
+
"page_idx": 12
|
| 1988 |
+
},
|
| 1989 |
+
{
|
| 1990 |
+
"type": "text",
|
| 1991 |
+
"text": "A.4Baselines",
|
| 1992 |
+
"text_level": 1,
|
| 1993 |
+
"bbox": [
|
| 1994 |
+
509,
|
| 1995 |
+
636,
|
| 1996 |
+
633,
|
| 1997 |
+
650
|
| 1998 |
+
],
|
| 1999 |
+
"page_idx": 12
|
| 2000 |
+
},
|
| 2001 |
+
{
|
| 2002 |
+
"type": "text",
|
| 2003 |
+
"text": "In this work, we compare the proposed ALUP with the following representative baselines:",
|
| 2004 |
+
"bbox": [
|
| 2005 |
+
507,
|
| 2006 |
+
657,
|
| 2007 |
+
878,
|
| 2008 |
+
688
|
| 2009 |
+
],
|
| 2010 |
+
"page_idx": 12
|
| 2011 |
+
},
|
| 2012 |
+
{
|
| 2013 |
+
"type": "list",
|
| 2014 |
+
"sub_type": "text",
|
| 2015 |
+
"list_items": [
|
| 2016 |
+
"- DTC (Han et al., 2019): A semi-supervised deep clustering approach with a novel mechanism for estimating the number of intents based on labeled data.",
|
| 2017 |
+
"- CDAC+ (Lin et al., 2020): A pseudo-labeling approach that employs pairwise constraints and a target distribution as guiding factors in the learning of new categories.",
|
| 2018 |
+
"- DeepAligned (Zhang et al., 2021b): A semi-supervised approach that addresses the clustering inconsistency problem by using an alignment strategy for learning utterance embeddings."
|
| 2019 |
+
],
|
| 2020 |
+
"bbox": [
|
| 2021 |
+
507,
|
| 2022 |
+
689,
|
| 2023 |
+
880,
|
| 2024 |
+
881
|
| 2025 |
+
],
|
| 2026 |
+
"page_idx": 12
|
| 2027 |
+
},
|
| 2028 |
+
{
|
| 2029 |
+
"type": "page_number",
|
| 2030 |
+
"text": "13",
|
| 2031 |
+
"bbox": [
|
| 2032 |
+
487,
|
| 2033 |
+
902,
|
| 2034 |
+
510,
|
| 2035 |
+
915
|
| 2036 |
+
],
|
| 2037 |
+
"page_idx": 12
|
| 2038 |
+
},
|
| 2039 |
+
{
|
| 2040 |
+
"type": "footer",
|
| 2041 |
+
"text": "7857",
|
| 2042 |
+
"bbox": [
|
| 2043 |
+
480,
|
| 2044 |
+
927,
|
| 2045 |
+
519,
|
| 2046 |
+
940
|
| 2047 |
+
],
|
| 2048 |
+
"page_idx": 12
|
| 2049 |
+
},
|
| 2050 |
+
{
|
| 2051 |
+
"type": "table",
|
| 2052 |
+
"img_path": "images/7ae488a5c7bd76ffd6b8cb71fdb4eb6c9e2f356c0bca6ba32978c8472e7fa718.jpg",
|
| 2053 |
+
"table_caption": [],
|
| 2054 |
+
"table_footnote": [],
|
| 2055 |
+
"table_body": "<table><tr><td rowspan=\"2\">Cluster Num</td><td rowspan=\"2\">Methods</td><td colspan=\"3\">Banking77</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td rowspan=\"3\">K=77 (gold)</td><td>USNID</td><td>65.85</td><td>56.53</td><td>81.94</td></tr><tr><td>CsePL</td><td>71.06</td><td>60.36</td><td>83.32</td></tr><tr><td>ALUP</td><td>74.61</td><td>62.64</td><td>84.06</td></tr><tr><td rowspan=\"3\">K=74 (predicted)</td><td>USNID</td><td>60.72</td><td>49.18</td><td>78.11</td></tr><tr><td>CsePL</td><td>69.75</td><td>56.70</td><td>81.30</td></tr><tr><td>ALUP</td><td>72.55</td><td>61.04</td><td>82.78</td></tr></table>",
|
| 2056 |
+
"bbox": [
|
| 2057 |
+
121,
|
| 2058 |
+
83,
|
| 2059 |
+
485,
|
| 2060 |
+
204
|
| 2061 |
+
],
|
| 2062 |
+
"page_idx": 13
|
| 2063 |
+
},
|
| 2064 |
+
{
|
| 2065 |
+
"type": "text",
|
| 2066 |
+
"text": "Table 7: Effect of estimating cluster number $K$ .",
|
| 2067 |
+
"bbox": [
|
| 2068 |
+
137,
|
| 2069 |
+
212,
|
| 2070 |
+
463,
|
| 2071 |
+
229
|
| 2072 |
+
],
|
| 2073 |
+
"page_idx": 13
|
| 2074 |
+
},
|
| 2075 |
+
{
|
| 2076 |
+
"type": "list",
|
| 2077 |
+
"sub_type": "text",
|
| 2078 |
+
"list_items": [
|
| 2079 |
+
"- ProbNID (Zhou et al., 2023): A probabilistic framework that capitalizes on the expectation-maximization algorithm, conceptualizing intent assignments as probable latent variables.",
|
| 2080 |
+
"- DCSC (Wei et al., 2022): A pseudo-labeling method involving the dual-task, which uses the SwAV algorithm and Sinkhorn-Knopp (Cuturi, 2013) to assign soft clusters.",
|
| 2081 |
+
"- MTP-CLNN (Zhang et al., 2022a): A two-stage method that enhances representation learning via a multi-task pre-training and a nearest neighbor contrastive learning for identifying new categories.",
|
| 2082 |
+
"- USNID (Zhang et al., 2023a): A framework supports both unsupervised and semi-supervised new intent discovery, incorporating an effective centroid initialization strategy designed to learn cluster representations by utilizing historical clustering information.",
|
| 2083 |
+
"- CsePL (Liang and Liao, 2023): A method that utilizes two-level contrastive learning with label semantic alignment to enhance the cluster semantics and a soft prompting strategy for discovering new intents."
|
| 2084 |
+
],
|
| 2085 |
+
"bbox": [
|
| 2086 |
+
115,
|
| 2087 |
+
256,
|
| 2088 |
+
487,
|
| 2089 |
+
640
|
| 2090 |
+
],
|
| 2091 |
+
"page_idx": 13
|
| 2092 |
+
},
|
| 2093 |
+
{
|
| 2094 |
+
"type": "text",
|
| 2095 |
+
"text": "We re-run the released code of ProbNID to get its results. The other baselines' results are retrieved from Zhang et al. (2023a).",
|
| 2096 |
+
"bbox": [
|
| 2097 |
+
115,
|
| 2098 |
+
644,
|
| 2099 |
+
487,
|
| 2100 |
+
692
|
| 2101 |
+
],
|
| 2102 |
+
"page_idx": 13
|
| 2103 |
+
},
|
| 2104 |
+
{
|
| 2105 |
+
"type": "text",
|
| 2106 |
+
"text": "B Estimate the Category Number K",
|
| 2107 |
+
"text_level": 1,
|
| 2108 |
+
"bbox": [
|
| 2109 |
+
115,
|
| 2110 |
+
708,
|
| 2111 |
+
445,
|
| 2112 |
+
725
|
| 2113 |
+
],
|
| 2114 |
+
"page_idx": 13
|
| 2115 |
+
},
|
| 2116 |
+
{
|
| 2117 |
+
"type": "text",
|
| 2118 |
+
"text": "In the complex task of generalized category discovery in real-world scenarios, accurately predicting the total number of categories, represented as $K$ , remains a significant challenge. Drawing from the methodologies proposed by Zhang et al. (2021b), our research leverages pre-initialized intent features to determine $K$ autonomously. We begin by assigning an initially large number of clusters, $K'$ , and then utilize a refined model to extract feature",
|
| 2119 |
+
"bbox": [
|
| 2120 |
+
114,
|
| 2121 |
+
737,
|
| 2122 |
+
489,
|
| 2123 |
+
881
|
| 2124 |
+
],
|
| 2125 |
+
"page_idx": 13
|
| 2126 |
+
},
|
| 2127 |
+
{
|
| 2128 |
+
"type": "text",
|
| 2129 |
+
"text": "representations from our training dataset. These representations are grouped into distinct clusters using the K-means algorithm. Clusters that are densely populated and demonstrate well-defined boundaries are recognized as valid category clusters. Conversely, smaller, less distinct clusters are considered less relevant and subsequently discarded. The selection criteria for this process can be outlined as follows.",
|
| 2130 |
+
"bbox": [
|
| 2131 |
+
507,
|
| 2132 |
+
85,
|
| 2133 |
+
882,
|
| 2134 |
+
230
|
| 2135 |
+
],
|
| 2136 |
+
"page_idx": 13
|
| 2137 |
+
},
|
| 2138 |
+
{
|
| 2139 |
+
"type": "equation",
|
| 2140 |
+
"text": "\n$$\nK = \\sum_ {i = 1} ^ {K ^ {\\prime}} \\delta (| S _ {i} | > \\rho),\n$$\n",
|
| 2141 |
+
"text_format": "latex",
|
| 2142 |
+
"bbox": [
|
| 2143 |
+
611,
|
| 2144 |
+
239,
|
| 2145 |
+
779,
|
| 2146 |
+
282
|
| 2147 |
+
],
|
| 2148 |
+
"page_idx": 13
|
| 2149 |
+
},
|
| 2150 |
+
{
|
| 2151 |
+
"type": "text",
|
| 2152 |
+
"text": "where $|S_{i}|$ is the $i$ -th grouped cluster size, $\\rho$ is the filtering threshold. $\\delta(\\cdot)$ denotes the indicator function, whose output is 1 if the condition is satisfied.",
|
| 2153 |
+
"bbox": [
|
| 2154 |
+
507,
|
| 2155 |
+
294,
|
| 2156 |
+
880,
|
| 2157 |
+
341
|
| 2158 |
+
],
|
| 2159 |
+
"page_idx": 13
|
| 2160 |
+
},
|
| 2161 |
+
{
|
| 2162 |
+
"type": "text",
|
| 2163 |
+
"text": "Experimental results are reported in Table 7. The comparative results show that the proposed ALUP incurs only a minor performance decline with the predicted category number. This indicates that our ALUP exhibits robustness in handling inaccurately predicted category number.",
|
| 2164 |
+
"bbox": [
|
| 2165 |
+
507,
|
| 2166 |
+
342,
|
| 2167 |
+
880,
|
| 2168 |
+
439
|
| 2169 |
+
],
|
| 2170 |
+
"page_idx": 13
|
| 2171 |
+
},
|
| 2172 |
+
{
|
| 2173 |
+
"type": "page_number",
|
| 2174 |
+
"text": "14",
|
| 2175 |
+
"bbox": [
|
| 2176 |
+
487,
|
| 2177 |
+
902,
|
| 2178 |
+
510,
|
| 2179 |
+
915
|
| 2180 |
+
],
|
| 2181 |
+
"page_idx": 13
|
| 2182 |
+
},
|
| 2183 |
+
{
|
| 2184 |
+
"type": "footer",
|
| 2185 |
+
"text": "7858",
|
| 2186 |
+
"bbox": [
|
| 2187 |
+
480,
|
| 2188 |
+
927,
|
| 2189 |
+
519,
|
| 2190 |
+
940
|
| 2191 |
+
],
|
| 2192 |
+
"page_idx": 13
|
| 2193 |
+
}
|
| 2194 |
+
]
|
2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/3e184059-006f-40ea-abc5-8e4f9a6e51eb_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/3e184059-006f-40ea-abc5-8e4f9a6e51eb_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c995fc196a3d0e08b269818c9220b955da80bfb8f099782f78c5cc60afc1d23f
|
| 3 |
+
size 678745
|
2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/full.md
ADDED
|
@@ -0,0 +1,402 @@
| 1 |
+
# Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery
|
| 2 |
+
|
| 3 |
+
Jinggui Liang $^{1}$ , Lizi Liao $^{1}$ , Hao Fei $^{2}$ , Bobo Li $^{3}$ , Jing Jiang $^{1}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Singapore Management University, $^{2}$ National University of Singapore, $^{3}$ Wuhan University jg.liang.2023@phdcs.smu.edu.sg lzliao@smu.edu.sg haofei37@nus.edu.sg boboli@whu.edu.cn jingjiang@smu.edu.sg
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
Generalized category discovery faces a key issue: the lack of supervision for new and unseen data categories. Traditional methods typically combine supervised pretraining with self-supervised learning to create models, and then employ clustering for category identification. However, these approaches tend to become overly tailored to known categories, failing to fully resolve the core issue. Hence, we propose to integrate the feedback from LLMs into an active learning paradigm. Specifically, our method innovatively employs uncertainty propagation to select data samples from high-uncertainty regions, which are then labeled using LLMs through a comparison-based prompting scheme. This not only eases the labeling task but also enhances accuracy in identifying new categories. Additionally, a soft feedback propagation mechanism is introduced to minimize the spread of inaccurate feedback. Experiments on various datasets demonstrate our framework's efficacy and generalizability, significantly improving baseline models at a nominal average cost.
|
| 10 |
+
|
| 11 |
+
# 1 Introduction
|
| 12 |
+
|
| 13 |
+
Generalized Category Discovery (GCD) is a crucial task in open-world computing (Lin et al., 2020; Zhang et al., 2021b), where the goal is to automate the classification of partially labeled data. It uniquely challenges systems to not only recognize predefined categories but also to discover entirely new categories from a mix of labeled and unlabeled data (Yang et al., 2021; Zeng et al., 2022). This task mirrors the dynamic and evolving nature of real-world data, where new categories frequently emerge, necessitating models that can adapt and learn continually.
|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
Figure 1: The active learning loop with propagated LLM feedback for model training.
|
| 17 |
+
|
| 18 |
+
In traditional GCD methods, the initial step often involves supervised pretraining on a labeled dataset to establish a foundational understanding of known categories (Zhong et al., 2021; Vaze et al., 2022). This is followed by self-supervised learning on unlabeled data or even contrastive learning, allowing the model to extract and learn patterns without explicit category labels (An et al., 2023). The final stage typically employs clustering techniques, like K-Means (MacQueen et al., 1967), to group similar data points, aiming to identify categories. However, this sequential process tends to imprint a bias towards the initially learned, known categories, thus limiting the model's ability to generalize to new, unseen categories (Mou et al., 2022). Such overfitting to familiar data restricts the scope of GCD, preventing it from fully embracing the open-world setting it is intended for.
|
| 19 |
+
|
| 20 |
+
Recently, Large Language Models (LLMs) such as GPT-4 (OpenAI, 2023), PaLM (Chowdhery et al., 2023), and LLaMA (Touvron et al., 2023) have shown extraordinary versatility across a broad range of NLP tasks, providing good-quality supervision
|
| 21 |
+
|
| 22 |
+
signals for summarization (Liu et al., 2023), clustering (Zhang et al., 2023c), etc. Their ability to understand and generate nuanced language patterns makes them promising for supplementing the supervision of new categories in GCD. However, the direct application of LLMs in GCD, which typically involves processing and clustering thousands of samples, raises substantial challenges. The intensive computational demands of LLMs could lead to issues with data privacy, high latency, and increased costs, which are particularly problematic in large-scale GCD scenarios.
|
| 23 |
+
|
| 24 |
+
To circumvent the above challenges, integrating LLMs into an active learning framework presents a practical and efficient solution. This approach entails selectively using LLMs to provide supervision signals, especially in cases where the data is most uncertain or the categories are novel. However, this integration brings forth new challenges: optimizing the use of LLMs to ensure cost and time efficiency, and critically, ensuring the reliability of the feedback provided by LLMs. Effective strategies are needed to mitigate the risk of propagating incorrect feedback from LLMs.
|
| 25 |
+
|
| 26 |
+
Addressing these challenges, we propose a novel framework for GCD to Actively Learn from LLMs with Uncertainty Propagation, termed as ALUP. As shown in Figure 1, we begin by employing an uncertainty propagation strategy, which systematically identifies data samples in regions of high uncertainty – these are the areas where the model is least confident and, therefore, where LLM input could be most beneficial. The selected samples are then labeled using LLMs through a sophisticated comparison-based prompting technique. This method leverages the comparative strength of LLMs, making it easier for them to provide accurate feedback, especially for new and complex categories. To further enhance our approach, we incorporate a soft label propagation mechanism. This mechanism carefully extends the LLMs-generated feedback to similar, neighboring samples, effectively amplifying the value of each LLM query while minimizing the risk of propagating errors. Rigorous testing on diverse datasets has shown that our method not only significantly improves upon existing baseline models but also does so with a nominal increase in cost, offering a scalable, efficient, and effective solution for the
|
| 27 |
+
|
| 28 |
+
intricate problem of GCD.
|
| 29 |
+
|
| 30 |
+
The main contributions of this work can be summarized as follows:
|
| 31 |
+
|
| 32 |
+
- We developed an innovative active learning framework integrating LLMs' feedback for GCD, addressing the challenge of limited supervision for new data categories.
|
| 33 |
+
- We combined uncertainty-region based data selection and comparison-based LLMs prompting, significantly enhancing GCD accuracy and efficiency with soft propagation.
|
| 34 |
+
- Experiments demonstrated marked improvements over traditional GCD methods across diverse datasets, affirming the ALUP's effectiveness and resource efficiency.
|
| 35 |
+
|
| 36 |
+
# 2 Related Work
|
| 37 |
+
|
| 38 |
+
# 2.1 Generalized Category Discovery
|
| 39 |
+
|
| 40 |
+
Unsupervised Methods: The realm of GCD has been fundamentally shaped by unsupervised methods, focusing on learning cluster-friendly representations. These early methods (Xie et al., 2016; Yang et al., 2017; Padmasundari and Bangalore, 2018; Caron et al., 2018; Hadifar et al., 2019) laid the groundwork by using unsupervised clustering algorithms to group samples based on inherent similarities. Recent advancements, particularly with the emergence of LLMs, have brought a paradigm shift. The integration of LLMs in unsupervised GCD (De Raedt et al., 2023; Zhang et al., 2023c; Viswanathan et al., 2023) represents a novel direction, pushing the boundaries of category identification beyond traditional clustering techniques.
|
| 41 |
+
|
| 42 |
+
Semi-Supervised Methods: In contrast, semi-supervised GCD methods blend limited labeled data with possibly larger unlabeled data to enhance category discovery (Hsu et al., 2018, 2019; Han et al., 2019). Methods like CDAC+ (Lin et al., 2020) utilize labeled data to guide clustering, creating a synergy between supervised knowledge and unsupervised discovery. The two-stage scheme, involving base model pretraining and iterative optimization (Zhang et al., 2021a,b; Wu et al., 2022; Wei et al., 2022; Zhang et al., 2023a; Zhou et al., 2023; Mou et al., 2023), has gained popularity. It benefits from pseudo label signals generated by the pretrained model, although it often struggles with
|
| 43 |
+
|
| 44 |
+
the quality of pseudo labels and sample representations. Efforts to refine learning objectives, such as contrastive learning (Mou et al., 2022; Zhang et al., 2022a), aim to directly learn discriminative representations for new categories. Yet, the challenge remains in effectively decoupling pseudo label generation from representation learning (Wu et al., 2024), a gap our work addresses by introducing LLMs into the GCD.
|
| 45 |
+
|
| 46 |
+
# 2.2 Active Learning in the Era of LLMs
|
| 47 |
+
|
| 48 |
+
Traditional Active Learning (AL): AL has traditionally been a solution to the data scarcity problem in NLP (Ren et al., 2022; Zhang et al., 2022b), focusing on identifying and annotating informative samples. Various acquisition strategies have been employed, including uncertainty-based (Wang and Shang, 2014; Schröder et al., 2022; Yu et al., 2023), diversity-based (Sener and Savarese, 2018; Gissin and Shalev-Shwartz, 2019; Citovsky et al., 2021), and hybrid methods (Liu et al., 2018; Zhan et al., 2022). While effective, these methods still rely on expensive human expertise for annotation.
|
| 49 |
+
|
| 50 |
+
LLMs as a Game-Changer in AL: With the advent of LLMs, a new frontier in AL has been explored. LLMs are now being considered as cost-effective alternatives to human experts (Zhang et al., 2023c; Cheng et al., 2023; Zhang et al., 2023b; Margatina et al., 2023; Liao et al., 2023). For instance, Xiao et al. (2023) demonstrated the use of LLMs as active annotators, harnessing their ability to distill task-specific knowledge interactively. In our work, we further this exploration by applying AL with LLMs to GCD. Our unique contribution not only lies in the implementation of an uncertainty-driven propagation strategy to maximize the utility of LLMs in a cost-effective manner, but also in the design of a soft feedback propagation scheme to minimize the spread of inaccurate feedback.
|
| 51 |
+
|
| 52 |
+
# 3 Methodology
|
| 53 |
+
|
| 54 |
+
# 3.1 Problem Formulation
|
| 55 |
+
|
| 56 |
+
We study the GCD problem defined as follows: assume a known category set $\mathcal{C}_k$ and an unknown category set $\mathcal{C}_u$, where $\mathcal{C}_k \cap \mathcal{C}_u = \emptyset$ and $|\mathcal{C}_k| + |\mathcal{C}_u| = K$. Here $K$ is the total number of categories. Under the semi-supervised GCD setting,
|
| 57 |
+
|
| 58 |
+
given a labeled data set $\mathcal{D}_l = \{(x_i, y_i) \mid y_i \in \mathcal{C}_k\}_{i=1}^L$ and an unlabeled data set $\mathcal{D}_u = \{x_j\}_{j=1}^U$, where the category of each $x_j$ belongs to $\{\mathcal{C}_k \cup \mathcal{C}_u\}$, the task is to learn a representation extractor $\mathcal{M}$ to identify all unknown categories from $\mathcal{D}_u$ and perform accurate clustering to classify each $x_i$ in $\{\mathcal{D}_l \cup \mathcal{D}_u\}$ into its corresponding category.
|
| 59 |
+
|
| 60 |
+
# 3.2 Approach Overview
|
| 61 |
+
|
| 62 |
+
General GCD methods typically first extract representations $\mathcal{Z} = \{\pmb {z}_i\}_{i = 1}^{\left|\mathcal{D}_l\cup \mathcal{D}_u\right|}$ via model $\mathcal{M}$ for each sample $x_{i}$ and then perform K-Means to locate cluster centers $\{\pmb {\mu}_i\}_{i = 1}^K$ for doing GCD. Our proposed ALUP builds upon existing GCD models and effectively incorporates LLMs' feedback in an active learning scheme.
|
| 63 |
+
|
| 64 |
+
Figure 2 depicts an overview of our ALUP framework for GCD. It encompasses three key designs: Uncertainty Propagation for sample selection, Comparison-based Prompting for soliciting LLMs' feedback, and Soft Feedback Propagation for wisely spreading the feedback. In what follows, we will detail these designs separately.
|
| 65 |
+
|
| 66 |
+
# 3.3 Uncertainty Propagation (UP)
|
| 67 |
+
|
| 68 |
+
Within the ALUP framework, we design the uncertainty propagation to select the most informative unlabeled samples that are representative of high-uncertainty regions. Note that given a general GCD model $\mathcal{M}$ , we can extract representations $z_{i}$ for each $x_{i}$ in the dataset and perform $K$ -means to locate cluster centers $\{\pmb{\mu}_k\}_{k=1}^K$ . To estimate the model predictive uncertainty, following Xie et al. (2016), we use the Student's $t$ -distribution to compute the probability of assigning the sample $x_{i}$ to each cluster $k$ :
|
| 69 |
+
|
| 70 |
+
$$
|
| 71 |
+
q _ {i k} = \frac {\left(1 + \| \boldsymbol {z} _ {i} - \boldsymbol {\mu} _ {k} \| ^ {2} / \alpha\right) ^ {- \frac {\alpha + 1}{2}}}{\sum_ {k ^ {\prime}} \left(1 + \| \boldsymbol {z} _ {i} - \boldsymbol {\mu} _ {k ^ {\prime}} \| ^ {2} / \alpha\right) ^ {- \frac {\alpha + 1}{2}}}, \tag {1}
|
| 72 |
+
$$
|
| 73 |
+
|
| 74 |
+
where $\alpha$ represents the degrees of freedom in the Student's $t$ -distribution. After obtaining the model predictive probabilities, we use the entropy (Lewis and Gale, 1994) to measure the uncertainty for each sample $x_{i}$ :
|
| 75 |
+
|
| 76 |
+
$$
|
| 77 |
+
u \left(x _ {i}\right) = - \sum_ {k = 1} ^ {K} q _ {i k} \log q _ {i k}. \tag {2}
|
| 78 |
+
$$
|
| 79 |
+
|
| 80 |
+
Here, a higher $u(x_{i})$ can indicate a higher likelihood of the model $\mathcal{M}$ incorrectly assigning $x_{i}$ to a
|
| 81 |
+
|
| 82 |
+

|
| 83 |
+
Figure 2: The overall ALUP framework. It consists of three main designs: Uncertainty Propagation for region-based sample selection, Comparison-based Prompting for soliciting more accurate LLM's feedback, and Soft Feedback Propagation for wisely spreading the feedback to boost both efficiency and effectiveness.
|
| 84 |
+
|
| 85 |
+
wrong cluster. However, directly adopting this individual uncertainty score for selecting samples can lead to suboptimal outcomes as it can be sensitive to outliers (Karamcheti et al., 2021). To address this issue, following Yu et al. (2023), we further measure the similarities between each sample and its neighbors and propagate the individual uncertainty score to neighbors. Specifically, for each data point $x_{i}$ , we first find its $k$ -nearest neighbors based on the Euclidean distance as:
|
| 86 |
+
|
| 87 |
+
$$
|
| 88 |
+
\mathcal{N}\left(x_i\right) = \operatorname*{KNN}_{\text{top-}k}\left(\boldsymbol{z}_i, \mathcal{Z}^u\right), \tag{3}
|
| 89 |
+
$$
|
| 90 |
+
|
| 91 |
+
where $\mathcal{Z}^u$ denotes the representations of unlabeled samples and $\mathcal{N}(x_i)$ represents the set of nearest neighbors of $x_i$. Then, we calculate the similarities between $x_i$ and its neighbors based on the radial basis function (RBF) (Schölkopf et al., 1997):
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
\operatorname {s i m} \left(\boldsymbol {z} _ {i}, \boldsymbol {z} _ {j}\right) = \exp \left(- \rho \| \boldsymbol {z} _ {i} - \boldsymbol {z} _ {j} \| _ {2} ^ {2}\right), \tag {4}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
where $x_{j}\in \mathcal{N}(x_{i})$ and $\rho$ is a hyper-parameter that regulates the extent of uncertainty propagation. After measuring the similarities, we refine the uncertainty score of sample $x_{i}$ as:
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
u (x _ {i}) = u (x _ {i}) + \frac {\sum_ {x _ {j} \in \mathcal {N} (x _ {i})} \operatorname {s i m} \left(\boldsymbol {z} _ {i} , \boldsymbol {z} _ {j}\right) \cdot u (x _ {j})}{\left| \mathcal {N} (x _ {i}) \right|}. \tag {5}
|
| 101 |
+
$$
|
| 102 |
+
|
| 103 |
+
After several rounds of uncertainty score propagation, we obtain the final uncertainty score $u(x_{i})$ .
|
| 104 |
+
|
| 105 |
+
Based on this score, we greedily select one sample $x_{i}^{q}$ from each cluster $c_{i}$ to form the sample set $\mathcal{Q}$ :
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
x _ {i} ^ {q} = \underset {x _ {j} \in c _ {i}} {\operatorname {a r g m a x}} (u (x _ {j})). \tag {6}
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
We emphasize that a sample will exhibit higher propagated uncertainty only when it and its neighboring samples both possess high uncertainty levels. Hence, we are selecting samples from uncertain regions. By actively obtaining feedback from LLMs for such samples in $\mathcal{Q}$ , we can significantly improve the model performance in GCD.
|
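To make the selection procedure concrete, here is a minimal NumPy sketch of Equations (1)-(6). It assumes that representations of the unlabeled pool, cluster centers, and cluster assignments have already been produced by the GCD model; the function name, the brute-force neighbor search, and the single-round default for propagation are illustrative choices, not values fixed by the paper.

```python
import numpy as np

def select_uncertain_samples(Z, centers, labels, alpha=1.0, k=25, rho=1.0, rounds=1):
    """Minimal sketch of Uncertainty Propagation (Eqs. 1-6).

    Z:       (N, d) representations of the unlabeled pool from the GCD model
    centers: (K, d) K-means cluster centers
    labels:  (N,)   cluster assignment of each sample
    """
    # Eq. (1): soft assignment q_ik under a Student's t-distribution
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (N, K) squared distances
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    q /= q.sum(axis=1, keepdims=True)

    # Eq. (2): entropy of the soft assignment as the individual uncertainty
    u = -(q * np.log(q + 1e-12)).sum(axis=1)                    # (N,)

    # Eq. (3): k-nearest neighbors by (squared) Euclidean distance
    dist = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)       # (N, N)
    np.fill_diagonal(dist, np.inf)                              # exclude self
    nn = np.argsort(dist, axis=1)[:, :k]                        # (N, k) neighbor indices

    # Eq. (4): RBF similarity to each neighbor
    sim = np.exp(-rho * np.take_along_axis(dist, nn, axis=1))   # (N, k)

    # Eq. (5): propagate uncertainty from neighbors for a few rounds
    for _ in range(rounds):
        u = u + (sim * u[nn]).sum(axis=1) / k

    # Eq. (6): greedily pick the most uncertain sample in each cluster
    return {int(c): int(np.where(labels == c)[0][np.argmax(u[labels == c])])
            for c in np.unique(labels)}
```

The O(N^2) distance matrix keeps the sketch short; a KD-tree or approximate nearest-neighbor index would be the natural substitute at scale.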
| 112 |
+
|
| 113 |
+
# 3.4 Comparison-based Prompting (CP)
|
| 114 |
+
|
| 115 |
+
Upon identifying the most informative unlabeled samples through the UP strategy, we need to query LLMs to obtain pseudo category labels for these samples. However, since the labels of newly emerged categories remain unknown, it is infeasible to ask LLMs to directly generate a possibly brand-new label for each selected sample. To overcome this, we design a comparison-based prompting method from the clustering perspective, which prompts LLMs to classify a sample by comparing it with other samples representing distinct categories.
|
| 116 |
+
|
| 117 |
+
This CP method requires the selection of a representative sample for each category cluster. To this end, we first compute the distances of various samples within the cluster to its center $\pmb{\mu}_i$ , and then
|
| 118 |
+
|
| 119 |
+
select the sample closest to $\pmb{\mu}_i$ to represent this cluster. We denote this close-to-center sample as $\mu_i$. With these close-to-center samples $S = \{\mu_i\}_{i=1}^K$, we construct the prompt to query LLMs as:
|
| 120 |
+
|
| 121 |
+
Cluster $[c_1]$: Sample $[\mu_1]$; Cluster $[c_2]$: Sample $[\mu_2]$; $\ldots$; Cluster $[c_p]$: Sample $[\mu_p]$. Above is a list of samples representing distinct categories. Please identify one sample that shares the same or similar underlying category as the input sample from the provided list.
|
| 122 |
+
|
| 123 |
+
Here, $p$ is the number of representative samples used for the comparison. In our experiments, for each $x_{i}^{q}$ in $\mathcal{Q}$ , we empirically incorporate $p = |\mathcal{Q}| / 2$ representative samples that are closest to $x_{i}^{q}$ into the prompt. With this design, we can effectively utilize LLMs to classify the selected samples into their corresponding categories, denoted as $\mathcal{Q} = \{x_{i}^{q}, y_{i}^{LLM}\}_{i=1}^{K}$ , thus bypassing the requirement for explicit labels of unknown categories.
|
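A sketch of how the representative samples and the comparison prompt could be assembled is shown below. The representative selection follows the close-to-center rule above; how the input sample is appended to the template is our own assumption, as are all function and variable names.

```python
import numpy as np

def closest_to_center(Z, centers, labels):
    """Pick the sample nearest each cluster center (the close-to-center mu_i)."""
    reps = {}
    for c, mu in enumerate(centers):
        idx = np.where(labels == c)[0]
        reps[c] = int(idx[np.argmin(((Z[idx] - mu) ** 2).sum(-1))])
    return reps  # cluster id -> index of its representative sample

def build_cp_prompt(input_text, rep_texts_by_cluster):
    """Fill the comparison-based prompting template of Sec. 3.4 with the p
    representative samples closest to the query (pre-selected by the caller)."""
    listing = " ".join(f"Cluster [{c}]: Sample [{t}];"
                       for c, t in rep_texts_by_cluster.items())
    return (f"{listing} Above is a list of samples representing distinct "
            "categories. Please identify one sample that shares the same or "
            "similar underlying category as the input sample from the "
            f"provided list.\nInput sample: {input_text}")
```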
| 124 |
+
|
| 125 |
+
# 3.5 Soft Feedback Propagation (SFP)
|
| 126 |
+
|
| 127 |
+
By querying LLMs using the CP method, we can endow the selected unlabeled samples with their respective pseudo labels to augment the GCD models for discerning new categories. However, a performance gap persists between the partially and fully LLM-augmented GCD models. Given that the selection of the unlabeled samples is based on their model predictive uncertainty and neighboring uncertainty, and samples distributed close to each other are more likely to share the same category, we thus propose a soft feedback propagation mechanism to propagate the pseudo labels generated by LLMs across their similar neighbors, amplifying the utility of the feedback from LLMs without any additional cost. Specifically, for each $x_{i}^{q}$ in $\mathcal{Q}$ , we refine the model prediction $q_{j}$ of its uncertain neighbor $x_{j} \in \mathcal{N}(x_{i}^{q})$ in Equation (1) to propagate the LLM-generated pseudo label $y_{i}^{LLM}$ :
|
| 128 |
+
|
| 129 |
+
$$
|
| 130 |
+
\boldsymbol{q}_j = \left(1 - \operatorname{sim}\left(\boldsymbol{z}_j, \boldsymbol{z}_i^q\right)\right) \cdot \boldsymbol{q}_j + \operatorname{sim}\left(\boldsymbol{z}_j, \boldsymbol{z}_i^q\right) \cdot \boldsymbol{y}^{LLM}, \tag{7}
|
| 131 |
+
$$
|
| 132 |
+
|
| 133 |
+
$$
|
| 134 |
+
y _ {j} ^ {\text {p r o p}} = \left\{ \begin{array}{l l} y _ {i} ^ {L L M}, & \text {i f} \operatorname {a r g m a x} \left(\boldsymbol {q} _ {j}\right) = y _ {i} ^ {L L M} \\ - 1, & \text {o t h e r w i s e} \end{array} , \right. \tag {8}
|
| 135 |
+
$$
|
| 136 |
+
|
| 137 |
+
where $sim(\cdot, \cdot)$ denotes the similarity function defined in Equation (4). $\pmb{y}^{LLM}$ is a one-hot vector where the value of position $y_{i}^{LLM}$ is set to 1. To
|
| 138 |
+
|
| 139 |
+
interpret Equation (8), note that when the uncertain neighbor $x_{j} \in \mathcal{N}(x_{i}^{q})$ is assigned to the same cluster as the LLM-labeled sample $x_{i}^{q}$ according to the refined $q_{j}$, the pseudo label $y_{i}^{LLM}$ is propagated to $x_{j}$. Otherwise, $x_{j}$ rejects the pseudo label $y_{i}^{LLM}$ and remains an unlabeled sample.
|
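A compact sketch of Equations (7)-(8) for a single uncertain neighbor is given below; the variable names are placeholders, and the similarity reuses the RBF of Equation (4).

```python
import numpy as np

def soft_feedback_propagation(q_j, z_j, z_q, y_llm, rho=1.0):
    """Sketch of Soft Feedback Propagation (Eqs. 7-8) for one uncertain
    neighbor x_j of the LLM-labeled query sample x_q.

    q_j:   (K,) current soft assignment of x_j from Eq. (1)
    y_llm: int, pseudo cluster label the LLM assigned to x_q
    """
    # Eq. (4): RBF similarity between the neighbor and the query sample
    sim = np.exp(-rho * ((z_j - z_q) ** 2).sum())

    # Eq. (7): blend the model prediction with the one-hot LLM feedback
    y_vec = np.zeros_like(q_j)
    y_vec[y_llm] = 1.0
    q_j = (1.0 - sim) * q_j + sim * y_vec

    # Eq. (8): accept the pseudo label only if the refined prediction agrees;
    # otherwise the neighbor stays unlabeled (-1)
    return y_llm if int(np.argmax(q_j)) == y_llm else -1
```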
| 140 |
+
|
| 141 |
+
# 3.6 Model Optimization
|
| 142 |
+
|
| 143 |
+
After obtaining pseudo labels for the selected unlabeled samples in $\mathcal{Q}$ from LLMs and propagating these labels via SFP, we update the model using a supervised contrastive learning loss (Gao et al., 2021; Guo et al., 2022) as follows:
|
| 144 |
+
|
| 145 |
+
$$
|
| 146 |
+
\mathcal{L} = \sum_{i = 1}^{L^{\prime}} - \frac{1}{|\mathcal{N}^{\prime}(x_i)|} \sum_{x_j \in \mathcal{N}^{\prime}(x_i)} \log \frac{e^{\operatorname{sim}\left(z_i, z_j\right) / \tau}}{\sum_{k \neq i} e^{\operatorname{sim}\left(z_i, z_k\right) / \tau}}, \tag{9}
|
| 147 |
+
$$
|
| 148 |
+
|
| 149 |
+
where $L^{\prime}$ denotes the total number of labeled samples, including both the original labeled samples and the newly labeled samples obtained via the CP and SFP. $\mathcal{N}'(x_i)$ is the set of samples sharing the same category label with $x_{i}$ . $\tau$ is the temperature.
|
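For reference, a PyTorch sketch of the objective in Equation (9) follows. It assumes L2-normalized representations so that a dot product plays the role of the similarity function, and the temperature value shown is illustrative rather than taken from the paper.

```python
import torch

def supervised_contrastive_loss(z, labels, tau=0.07):
    """Sketch of the training objective in Eq. (9).

    z:      (L', d) L2-normalized representations of all labeled samples
            (original labels plus pseudo labels obtained via CP and SFP)
    labels: (L',) integer category labels; tau is the temperature
    """
    sim = z @ z.T / tau                                          # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask

    # the denominator runs over all k != i, so mask out the diagonal
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average the log-probabilities over each anchor's positive set N'(x_i)
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    return -(pos_log_prob.sum(dim=1) / n_pos).sum()
```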
| 150 |
+
|
| 151 |
+
# 4 Experiments
|
| 152 |
+
|
| 153 |
+
# 4.1 Datasets
|
| 154 |
+
|
| 155 |
+
We conduct experiments on three GCD datasets: BANKING (Casanueva et al., 2020), CLINC (Larson et al., 2019), and StackOverflow (Xu et al., 2015). The detailed statistics are reported in Appendix A.1. In our experiments, we keep the same train, development, and test splits as previous work (Liang and Liao, 2023). More experimental details are provided in the Appendix A.2.
|
| 156 |
+
|
| 157 |
+
# 4.2 Evaluation Metrics
|
| 158 |
+
|
| 159 |
+
Following Zhang et al. (2022a) and Liang and Liao (2023), we adopt three metrics for evaluating GCD performance: Accuracy (ACC) based on the Hungarian algorithm, Adjusted Rand Index (ARI), and Normalized Mutual Information (NMI). The specific definitions are presented in Appendix A.3. It is worth noting that ACC is regarded as the primary evaluation metric, with higher values indicating better GCD performance.
|
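As a concrete illustration of the Hungarian-matched ACC (the exact formula appears in Appendix A.3), here is a short sketch. It assumes integer labels in the range 0..D-1 and uses SciPy's assignment solver; the function name is ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Sketch of Hungarian-matched clustering accuracy (ACC).

    Builds the confusion matrix between predicted clusters and gold labels,
    finds the optimal one-to-one mapping, and scores the mapped predictions.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    D = int(max(y_true.max(), y_pred.max())) + 1
    w = np.zeros((D, D), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        w[p, t] += 1
    rows, cols = linear_sum_assignment(w.max() - w)   # minimize cost = maximize matches
    mapping = dict(zip(rows, cols))                   # predicted cluster -> gold label
    return float(np.mean([mapping[p] == t for t, p in zip(y_true, y_pred)]))
```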
| 160 |
+
|
| 161 |
+
# 4.3 Baselines
|
| 162 |
+
|
| 163 |
+
We compare with the following SOTA GCD methods: DTC (Han et al., 2019), CDAC+ (Lin et al., 2020),
|
| 164 |
+
|
| 165 |
+
DeepAligned (Zhang et al., 2021b), ProbNID (Zhou et al., 2023), DCSC (Wei et al., 2022), MTP-CLNN (Zhang et al., 2022a), USNID (Zhang et al., 2023a), and the best-performing method CsePL (Liang and Liao, 2023). We leave the details of these baselines in Appendix A.4.
|
| 166 |
+
|
| 167 |
+
# 4.4 Main Results
|
| 168 |
+
|
| 169 |
+
# 4.4.1 GCD Performance Comparison
|
| 170 |
+
|
| 171 |
+
Table 1 presents the main GCD results of our proposed ALUP against existing baselines, where the peak performance is highlighted in bold. Generally speaking, our ALUP consistently outperforms all existing baselines across three datasets by large margins. We analyze the results as follows:
|
| 172 |
+
|
| 173 |
+
Comparison of different methods in GCD: Table 1 reveals that ALUP significantly outperforms the existing leading baselines, such as CsePL and USNID. For example, ALUP surpasses the previous SOTA CsePL by margins of $2.51\%$ in ACC, $2.12\%$ in ARI, and $1.14\%$ in NMI on BANKING-$50\%$. Notably, the performance gains are more pronounced when a larger number of categories remain unknown; for example, ALUP's ACC improves by $3.55\%$ on BANKING-$25\%$. This indicates that ALUP can acquire effective supervision signals from LLMs, enhancing model performance in discovering new categories.
|
| 174 |
+
|
| 175 |
+
Comparison of different datasets: We evaluate the performance of the ALUP framework on different datasets, including the single-domain, fine-grained BANKING dataset and the multi-domain CLINC dataset. From Table 1, we notice that all existing methods exhibit significantly lower performance on BANKING compared to CLINC, indicating that the single-domain fine-grained scenario is more challenging for GCD. However, ALUP achieves a larger improvement of $1\% \sim 3\%$ over CsePL on BANKING-$50\%$, but only $0.8\% \sim 2\%$ on CLINC-$50\%$. This observation further underscores the benefits of our ALUP in providing effective supervision signals to cope with the challenges of fine-grained category discovery.
|
| 176 |
+
|
| 177 |
+
# 4.5 In-depth Analyses
|
| 178 |
+
|
| 179 |
+
In this subsection, we conduct further detailed analyses to explore the impact of each key component
|
| 180 |
+
|
| 181 |
+
within the proposed ALUP framework.
|
| 182 |
+
|
| 183 |
+
# 4.5.1 Effect of Uncertainty Propagation
|
| 184 |
+
|
| 185 |
+
Table 2 presents the experimental results of removing the UP strategy in Equation (5) from ALUP on the BANKING dataset. We observe a significant reduction in GCD performance across various known category ratios upon removal. In particular, the ACC of ALUP decreases by $1.20\%$, while the ARI and NMI drop by $1.34\%$ and $0.64\%$ on BANKING-$25\%$, respectively. This indicates that the UP strategy accurately identifies the most informative samples for querying LLMs to boost the GCD model performance, while notably avoiding outliers that exhibit high model uncertainty but are less beneficial for model learning.
|
| 186 |
+
|
| 187 |
+
# 4.5.2 Effect of Soft Feedback Propagation
|
| 188 |
+
|
| 189 |
+
We also explore the contribution of the SFP mechanism by comparing the model performance when omitting the feedback propagation from LLMs in Equation (8) with the standard ALUP. Table 2 illustrates a notable decline in model performance in the absence of SFP, with a decrease of $1.88\%$ in ACC, $1.67\%$ in ARI, and $0.38\%$ in NMI. Nevertheless, ALUP w/o SFP still slightly outperforms the best-performing baseline CsePL. We suggest that this observation can be explained by two main points: (1) The acquisition of supervision signals from LLMs for the informative samples is beneficial for enhancing the model's capacity to discover new categories. (2) The SFP strategy can effectively propagate the accurate supervision signals from LLMs, amplifying the utility of LLM's feedback while concurrently minimizing the risk of propagating errors.
In contrast to the SFP strategy, we also investigate a Hard Propagation strategy within the proposed ALUP (ALUP w/ HP), where LLMs' feedback is directly extended to the neighboring samples without any control. As presented in Table 2, model performance decreases significantly with hard propagation, falling even below the levels achieved by CsePL. This is probably due to the propagation of inaccurate supervision signals from LLMs, which introduces considerable noise into model learning.
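To make the soft/hard distinction concrete, here is a hypothetical sketch of both propagation modes in one function; `propagate_feedback` and its argument names are ours, and the paper's Equation (8) may weight neighbors differently.

```python
import numpy as np

def propagate_feedback(n_samples, n_classes, queried, llm_labels,
                       neighbors, sims, soft=True):
    """Turn LLM feedback on queried samples into targets for neighbors.

    queried:   indices of samples whose labels the LLM provided
    llm_labels: the LLM-assigned class index per queried sample
    neighbors[q], sims[q]: neighbor indices of q and their similarities
    soft=True  -> similarity-weighted spread (SFP-like)
    soft=False -> copy the label outright (the 'w HP' variant)
    """
    targets = np.zeros((n_samples, n_classes))
    for q, y in zip(queried, llm_labels):
        targets[q, y] = 1.0                      # LLM feedback on q itself
        for j, s in zip(neighbors[q], sims[q]):
            targets[j, y] += s if soft else 1.0  # weighted vs. hard copy
    mass = targets.sum(axis=1, keepdims=True)
    touched = mass[:, 0] > 0
    targets[touched] /= mass[touched]            # rows with signal -> distributions
    return targets, touched
```

With `soft=True`, a dubious neighbor receives only a weak training signal, which is the error-damping behavior the ablation credits to SFP.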
<table><tr><td rowspan="2">KCR</td><td rowspan="2">Methods</td><td colspan="3">BANKING</td><td colspan="3">CLINC</td><td colspan="3">StackOverflow</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td><td>ACC</td><td>ARI</td><td>NMI</td><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td rowspan="9">25%</td><td>DTC</td><td>31.75</td><td>19.09</td><td>55.59</td><td>56.90</td><td>41.92</td><td>79.35</td><td>29.54</td><td>17.51</td><td>29.96</td></tr><tr><td>CDAC+</td><td>48.00</td><td>33.74</td><td>66.39</td><td>66.24</td><td>50.02</td><td>84.68</td><td>51.61</td><td>30.99</td><td>46.16</td></tr><tr><td>DeepAligned</td><td>49.08</td><td>37.62</td><td>70.50</td><td>74.07</td><td>64.63</td><td>88.97</td><td>54.50</td><td>37.96</td><td>50.86</td></tr><tr><td>ProbNID</td><td>55.75</td><td>44.25</td><td>74.37</td><td>71.56</td><td>63.25</td><td>89.21</td><td>54.10</td><td>38.10</td><td>53.70</td></tr><tr><td>DCSC</td><td>60.15</td><td>49.75</td><td>78.18</td><td>79.89</td><td>72.68</td><td>91.70</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MTP-CLNN</td><td>65.06</td><td>52.91</td><td>80.04</td><td>83.26</td><td>76.20</td><td>93.17</td><td>74.70</td><td>54.80</td><td>73.35</td></tr><tr><td>USNID</td><td>65.85</td><td>56.53</td><td>81.94</td><td>83.12</td><td>77.95</td><td>94.17</td><td>75.76</td><td>65.45</td><td>74.91</td></tr><tr><td>CsePL</td><td>71.06</td><td>60.36</td><td>83.32</td><td>86.16</td><td>79.65</td><td>94.07</td><td>79.47</td><td>64.92</td><td>74.88</td></tr><tr><td>ALUP</td><td>74.61</td><td>62.64</td><td>84.06</td><td>88.40</td><td>82.44</td><td>94.84</td><td>82.20</td><td>64.54</td><td>76.58</td></tr><tr><td rowspan="9">50%</td><td>DTC</td><td>49.85</td><td>37.05</td><td>69.46</td><td>64.39</td><td>50.44</td><td>83.01</td><td>52.92</td><td>37.38</td><td>49.80</td></tr><tr><td>CDAC+</td><td>48.55</td><td>34.97</td><td>67.30</td><td>68.01</td><td>54.87</td><td>86.00</td><td>51.79</td><td>30.88</td><td>46.21</td></tr><tr><td>DeepAligned</td><td>59.38</td><td>47.95</td><td>76.67</td><td>80.70</td><td>72.56</td><td>91.59</td><td>74.52</td><td>57.62</td><td>68.28</td></tr><tr><td>ProbNID</td><td>63.02</td><td>50.42</td><td>77.95</td><td>82.62</td><td>75.27</td><td>92.72</td><td>73.20</td><td>62.46</td><td>74.54</td></tr><tr><td>DCSC</td><td>68.30</td><td>56.94</td><td>81.19</td><td>84.57</td><td>78.82</td><td>93.75</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MTP-CLNN</td><td>70.97</td><td>60.17</td><td>83.42</td><td>86.18</td><td>80.17</td><td>94.30</td><td>80.36</td><td>62.24</td><td>76.66</td></tr><tr><td>USNID</td><td>73.27</td><td>63.77</td><td>85.05</td><td>87.22</td><td>82.87</td><td>95.45</td><td>82.06</td><td>71.63</td><td>78.77</td></tr><tr><td>CsePL</td><td>76.94</td><td>66.66</td><td>85.65</td><td>88.66</td><td>83.14</td><td>95.09</td><td>85.68</td><td>71.99</td><td>80.28</td></tr><tr><td>ALUP</td><td>79.45</td><td>68.78</td><td>86.79</td><td>90.53</td><td>84.84</td><td>95.97</td><td>86.70</td><td>73.85</td><td>81.45</td></tr></table>
Table 1: Main performance results on generalized category discovery across three public datasets. KCR denotes the known category rate.
<table><tr><td rowspan="2">KCR</td><td rowspan="2">Methods</td><td colspan="3">BANKING</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td rowspan="4">25%</td><td>ALUP</td><td>74.61</td><td>62.64</td><td>84.06</td></tr><tr><td>- w/o UP</td><td>73.41</td><td>61.30</td><td>83.42</td></tr><tr><td>- w/o SFP</td><td>72.73</td><td>60.97</td><td>83.68</td></tr><tr><td>- w HP</td><td>70.24</td><td>59.08</td><td>82.32</td></tr><tr><td rowspan="4">50%</td><td>ALUP</td><td>79.45</td><td>68.78</td><td>86.79</td></tr><tr><td>- w/o UP</td><td>78.64</td><td>67.16</td><td>86.05</td></tr><tr><td>- w/o SFP</td><td>77.66</td><td>67.04</td><td>86.43</td></tr><tr><td>- w HP</td><td>75.60</td><td>64.33</td><td>84.72</td></tr></table>
Table 2: Ablation results on the BANKING dataset.
Figure 3: Effect of the number of propagated neighbors.
# 4.5.3 Number of Propagated Neighbors
To delve deeper into the effectiveness of the UP strategy within the ALUP framework, we conduct further experiments on the BANKING dataset to explore how varying the number of propagated neighbors in unlabeled sample selection affects model performance. Figure 3 illustrates the performance trends across different counts of propagated neighbors. Notably, as the number of propagated neighbors in Equation (3) increases, ALUP's performance improves, peaking at 25 propagated neighbors. Beyond this point, performance begins to decline. We hypothesize that this decrease is caused by the inclusion of neighbors with lower uncertainty, which potentially introduces significant noise into the process of unlabeled sample selection.
# 4.5.4 Effect of Representative Samples
To assess the effectiveness of the CP method, we examine how the number $p$ of representative samples integrated into the prompt for querying LLMs affects model performance. Experiments are conducted with $p$ set to {19, 38, 57, 77}, where 19 is about a quarter of the total cluster count. As detailed in Table 3, the best GCD performance is achieved by integrating 38 representative samples into the prompt when acquiring supervision signals from LLMs for the unlabeled samples. We attribute this observation to two factors: (1) A smaller $p$ may omit the representative samples sharing the same underlying category as the selected samples, limiting the LLMs' ability to offer the requisite supervision signals when comparing against the integrated representatives. (2) Conversely, incorporating a larger number of representative samples results in an extended prompt, which can lead LLMs to misclassify the chosen unlabeled samples into inaccurate categories, thereby hurting model performance.
In our standard CP method, we select the single closest-to-center sample within each cluster as its representative when constructing prompts to query LLMs. Expanding our investigation into the CP method, we experiment with an alternative strategy involving a close-to-center set (specifically, the top 3 samples nearest to each cluster center) to represent distinct clusters when prompting LLMs for pseudo category labels. As illustrated in Table 4, the experimental results on BANKING-25% show only marginal gains with this strategy, an increase of no more than 0.5% across all three metrics, while it requires additional LLM queries. Balancing the slight performance improvement against the rise in cost, we opt for the more straightforward and cost-effective strategy of using single closest-to-center samples within the CP method.
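As a rough illustration of how such a comparison prompt could be assembled, the sketch below picks one closest-to-center representative per cluster and keeps the $p$ clusters nearest to the uncertain sample; the candidate-selection rule and the prompt wording are our assumptions rather than the paper's exact template.

```python
import numpy as np

def build_cp_prompt(query_text, query_emb, centers, embs, texts, p=38):
    """Comparison-based prompting (CP) sketch.

    centers: (K, d) cluster centers; embs: (n, d) sample embeddings;
    texts: the n utterances. Returns a prompt pairing the uncertain
    utterance with p cluster representatives.
    """
    # One representative per cluster: the sample closest to its center.
    rep_idx = [int(np.linalg.norm(embs - c, axis=1).argmin()) for c in centers]
    # Keep the p clusters whose centers lie closest to the uncertain sample.
    keep = np.argsort([np.linalg.norm(query_emb - c) for c in centers])[:p]
    options = "\n".join(f"({i}) {texts[rep_idx[c]]}" for i, c in enumerate(keep))
    return (
        "Which candidate expresses the same intent as the query?\n"
        f"Query: {query_text}\nCandidates:\n{options}\n"
        "Answer with the candidate number only."
    )
```

This also makes the Table 3 trade-off visible: a small `p` risks dropping the true category's representative from the candidate list, while a large `p` inflates the prompt.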
# 4.6 Impact of Different Base GCD Models
In our experiments, we select the most informative unlabeled samples based on existing GCD models. To validate the effectiveness of the proposed ALUP, we also examine how its performance varies when different GCD models are integrated within ALUP on the BANKING-$50\%$ dataset. As depicted in Figure 4, we observe consistent and significant improvements with the proposed ALUP. This demonstrates that the ALUP framework is effective in acquiring supervision signals from LLMs to enhance performance in discovering new categories, and that it is adaptable to other GCD models.
<table><tr><td rowspan="2">KCR</td><td rowspan="2">p</td><td colspan="3">BANKING</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td rowspan="4">25%</td><td>19</td><td>73.70</td><td>61.40</td><td>83.58</td></tr><tr><td>38</td><td>74.61</td><td>62.64</td><td>84.06</td></tr><tr><td>57</td><td>72.01</td><td>60.86</td><td>83.04</td></tr><tr><td>77</td><td>71.36</td><td>59.58</td><td>82.64</td></tr><tr><td rowspan="4">50%</td><td>19</td><td>78.44</td><td>67.46</td><td>86.25</td></tr><tr><td>38</td><td>79.45</td><td>68.78</td><td>86.79</td></tr><tr><td>57</td><td>77.56</td><td>66.04</td><td>85.93</td></tr><tr><td>77</td><td>76.66</td><td>65.23</td><td>85.59</td></tr></table>
Table 3: Effect of the number of representative samples within the CP method.
<table><tr><td rowspan="2">Methods</td><td colspan="3">BANKING</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td>ALUP-standard</td><td>74.61</td><td>62.64</td><td>84.06</td></tr><tr><td>ALUP-close-to-center set</td><td>74.87</td><td>63.07</td><td>84.39</td></tr></table>
Table 4: Performance of representative sample selection strategies within the CP method on BANKING-25%.
Figure 4: Performance of various base GCD models within ALUP on BANKING-$50\%$.
# 4.7 Influence of Query Sample Number
We study the effect of varying the number of selected unlabeled samples for querying LLMs in Figure 5. Model performance increases with the number of samples selected for querying. However, this growth rate progressively diminishes, since the LLMs' feedback has already been propagated to neighboring samples and selecting additional informative samples becomes more challenging as the number of selected samples grows.
Figure 5: Effect of the number of query samples on BANKING-$50\%$.
<table><tr><td rowspan="2">Methods</td><td colspan="3">BANKING</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td>ALUP-gpt-3.5-turbo</td><td>74.61</td><td>62.64</td><td>84.06</td></tr><tr><td>ALUP-FlanT5-XXL</td><td>73.38</td><td>62.29</td><td>83.76</td></tr><tr><td>ALUP-text-embedding</td><td>71.95</td><td>61.48</td><td>83.70</td></tr></table>
Table 5: Effect of different LLMs.
# 4.8 Effect of Different LLMs
We also examine the performance impact of utilizing different LLMs within our ALUP. Specifically, we conduct experiments on BANKING-25%, comparing the closed-source gpt-3.5-turbo against the open-source FlanT5-XXL for deriving supervision signals. As shown in Table 5, the results reveal only a marginal performance decrease when employing FlanT5-XXL instead of gpt-3.5-turbo. Even so, ALUP with FlanT5-XXL still markedly outperforms the best-performing baseline CsePL, highlighting the adaptability of our ALUP to various LLMs.
Furthering our exploration into how LLMs are utilized in GCD, we evaluate the efficacy of our CP method against an alternative approach based on embedding similarity scores. For this comparison, we leverage the embedding model text-embedding-3-small from OpenAI to generate embeddings for both uncertain samples and cluster-representative samples, and calculate their similarity scores to determine pseudo category labels. As reported in Table 5, the embedding-score method yields a drop across performance metrics, underscoring the soundness of our CP method and its ability to capture the nuanced semantic relationships essential for GCD.
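A minimal sketch of this embedding-score baseline using the OpenAI Python client (it assumes `OPENAI_API_KEY` is set in the environment; the helper names are ours):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    """Embed a batch of strings with text-embedding-3-small."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def nearest_cluster(uncertain_text, representative_texts):
    """Assign the uncertain sample to the cluster whose representative
    is most cosine-similar in embedding space (the weaker baseline)."""
    vecs = embed([uncertain_text] + representative_texts)
    q, reps = vecs[0], vecs[1:]
    sims = reps @ q / (np.linalg.norm(reps, axis=1) * np.linalg.norm(q))
    return int(sims.argmax())
```

Unlike CP, this baseline never lets the LLM reason over the candidates jointly, which is one plausible reading of why it trails in Table 5.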
# 5 Conclusion
In summary, our ALUP framework innovatively integrates Large Language Models with uncertainty propagation in generalized category discovery, marking a significant leap in the field. By employing comparison-based LLM prompting and a novel soft feedback propagation mechanism, ALUP adeptly identifies and categorizes new data with enhanced accuracy and efficiency. This approach not only surpasses traditional GCD methods but also minimizes the risk of error propagation, a critical advancement in handling real-world, dynamic datasets with LLMs. Future endeavors will focus on refining LLM integration, extending our methods to multi-modal data, and enhancing scalability and data privacy measures, furthering ALUP's potential in diverse and evolving open-world computing.
# Acknowledgments
This research is supported by the Ministry of Education, Singapore, under its AcRF Tier 2 Funding (Proposal ID: T2EP20123-0052). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.
# Limitations
While our ALUP framework marks a significant advance in Generalized Category Discovery using LLMs, it does have some limitations. The reliance on LLMs can introduce biases and inaccuracies, particularly in areas where these models have limited training data or exposure. Although our propagation method effectively reduces overall costs, the initial computational demands of LLMs may still pose scalability challenges, especially for resource-limited environments. Additionally, the framework currently focuses on textual data, which could limit its applicability in multi-modal data scenarios. Moreover, while our soft feedback propagation mechanism aims to minimize error spread, it is not immune to the risk of amplifying initial inaccuracies from LLM feedback. Finally, data privacy and security remain critical concerns in the use of external LLMs, necessitating ongoing vigilance and adaptation.
# References

Wenbin An, Feng Tian, Qinghua Zheng, Wei Ding, Qianying Wang, and Ping Chen. 2023. Generalized category discovery with decoupled prototypical network. In AAAI, pages 12527-12535.

Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. 2018. Deep clustering for unsupervised learning of visual features. In ECCV, pages 139-156.

Inigo Casanueva, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. 2020. Efficient intent detection with dual sentence encoders. In NLP4ConvAI@ACL, pages 38-45.

Qinyuan Cheng, Xiaogui Yang, Tianxiang Sun, Linyang Li, and Xipeng Qiu. 2023. Improving contrastive learning of sentence embeddings from AI feedback. In Findings of ACL, pages 11122-11138.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2023. PaLM: Scaling language modeling with pathways. J. Mach. Learn. Res., pages 240:1-240:113.

Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, and Sanjiv Kumar. 2021. Batch active learning at scale. In NeurIPS, pages 11933-11944.

Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In NeurIPS, pages 2292-2300.

Maarten De Raedt, Frédéric Godin, Thomas Demeester, and Chris Develder. 2023. IDAS: Intent discovery with abstractive summarization. In NLP4ConvAI@ACL, pages 71-88.

Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In EMNLP, pages 6894-6910.

Daniel Gissin and Shai Shalev-Shwartz. 2019. Discriminative active learning. CoRR, abs/1907.06347.

Shasha Guo, Jing Zhang, Yanling Wang, Qianyi Zhang, Cuiping Li, and Hong Chen. 2022. DSM: Question generation over knowledge base via modeling diverse subgraphs with meta-learner. In EMNLP, pages 4194-4207.

Amir Hadifar, Lucas Sterckx, Thomas Demeester, and Chris Develder. 2019. A self-training approach for short text clustering. In RepL4NLP@ACL, pages 194-199.

Kai Han, Andrea Vedaldi, and Andrew Zisserman. 2019. Learning to discover novel visual categories via deep transfer clustering. In ICCV, pages 8400-8408.

Yen-Chang Hsu, Zhaoyang Lv, and Zsolt Kira. 2018. Learning to cluster in order to transfer across domains and tasks. In ICLR.

Yen-Chang Hsu, Zhaoyang Lv, Joel Schlosser, Phillip Odom, and Zsolt Kira. 2019. Multi-class classification without multi-class labels. In ICLR.

Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, and Christopher Manning. 2021. Mind your outliers! Investigating the negative impact of outliers on active learning for visual question answering. In ACL-IJCNLP, pages 7265-7281.

Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In EMNLP-IJCNLP, pages 1311-1316.

David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In SIGIR, pages 3-12.

Jinggui Liang and Lizi Liao. 2023. ClusterPrompt: Cluster semantic enhanced prompt learning for new intent discovery. In Findings of EMNLP, pages 10468-10481.

Lizi Liao, Grace Hui Yang, and Chirag Shah. 2023. Proactive conversational agents in the post-ChatGPT world. In SIGIR, pages 3452-3455.

Ting-En Lin, Hua Xu, and Hanlei Zhang. 2020. Discovering new intents via constrained deep adaptive clustering with cluster refinement. In AAAI, pages 8360-8367.

Ming Liu, Wray L. Buntine, and Gholamreza Haffari. 2018. Learning how to actively learn: A deep imitation learning approach. In ACL, pages 1874-1883.

Yixin Liu, Alexander R. Fabbri, Pengfei Liu, Dragomir Radev, and Arman Cohan. 2023. On learning to summarize with large language models as references. CoRR, abs/2305.14239.

James MacQueen et al. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281-297.

Katerina Margatina, Timo Schick, Nikolaos Aletras, and Jane Dwivedi-Yu. 2023. Active learning principles for in-context learning with large language models. In Findings of EMNLP, pages 5011-5034.

Yutao Mou, Keqing He, Pei Wang, Yanan Wu, Jingang Wang, Wei Wu, and Weiran Xu. 2022. Watch the neighbors: A unified k-nearest neighbor contrastive learning framework for OOD intent discovery. In EMNLP, pages 1517-1529.

Yutao Mou, Xiaoshuai Song, Keqing He, Chen Zeng, Pei Wang, Jingang Wang, Yunsen Xian, and Weiran Xu. 2023. Decoupling pseudo label disambiguation and representation learning for generalized intent discovery. In ACL, pages 9661-9675.

OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.

Padmasundari and Srinivas Bangalore. 2018. Intent discovery through unsupervised semantic text clustering. In INTERSPEECH, pages 606-610.

Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Brij B. Gupta, Xiaojiang Chen, and Xin Wang. 2022. A survey of deep active learning. ACM Comput. Surv., pages 180:1-180:40.

Bernhard Schölkopf, Kah Kay Sung, Christopher J. C. Burges, Federico Girosi, Partha Niyogi, Tomaso A. Poggio, and Vladimir Vapnik. 1997. Comparing support vector machines with gaussian kernels to radial basis function classifiers. IEEE Trans. Signal Process., pages 2758-2765.

Christopher Schröder, Andreas Niekler, and Martin Potthast. 2022. Revisiting uncertainty-based query strategies for active learning with transformers. In Findings of ACL, pages 2194-2203.

Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In ICLR.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models. CoRR, abs/2302.13971.

Sagar Vaze, Kai Han, Andrea Vedaldi, and Andrew Zisserman. 2022. Generalized category discovery. In CVPR, pages 7482-7491.

Vijay Viswanathan, Kiril Gashteovski, Carolin Lawrence, Tongshuang Wu, and Graham Neubig. 2023. Large language models enable few-shot clustering.

Dan Wang and Yi Shang. 2014. A new active labeling method for deep learning. In IJCNN, pages 112-119.

Feng Wei, Zhenbo Chen, Zhenghong Hao, Fengxin Yang, Hua Wei, Bing Han, and Sheng Guo. 2022. Semi-supervised clustering with contrastive learning for discovering new intents. CoRR, abs/2201.07604.

Yuxia Wu, Tianhao Dai, Zhedong Zheng, and Lizi Liao. 2024. Active discovering new slots for task-oriented conversation. TASLP, pages 1-11.

Yuxia Wu, Lizi Liao, Xueming Qian, and Tat-Seng Chua. 2022. Semi-supervised new slot discovery with incremental clustering. In Findings of EMNLP, pages 6207-6218.

Ruixuan Xiao, Yiwen Dong, Junbo Zhao, Runze Wu, Minmin Lin, Gang Chen, and Haobo Wang. 2023. FreeAL: Towards human-free active learning in the era of large language models. In EMNLP, pages 14520-14535.

Junyuan Xie, Ross B. Girshick, and Ali Farhadi. 2016. Unsupervised deep embedding for clustering analysis. In ICML, pages 478-487.

Jiaming Xu, Peng Wang, Guanhua Tian, Bo Xu, Jun Zhao, Fangyuan Wang, and Hongwei Hao. 2015. Short text clustering via convolutional neural networks. In VS@HLT-NAACL, pages 62-69.

Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, and Mingyi Hong. 2017. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In ICML, pages 3861-3870.

Jingkang Yang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. 2021. Generalized out-of-distribution detection: A survey. CoRR, abs/2110.11334.

Yue Yu, Rongzhi Zhang, Ran Xu, Jieyu Zhang, Jiaming Shen, and Chao Zhang. 2023. Cold-start data selection for better few-shot language model fine-tuning: A prompt-based uncertainty propagation approach. In ACL, pages 2499-2521.

Weihao Zeng, Keqing He, Zechen Wang, Dayuan Fu, Guanting Dong, Ruotong Geng, Pei Wang, Jingang Wang, Chaobo Sun, Wei Wu, and Weiran Xu. 2022. Semi-supervised knowledge-grounded pre-training for task-oriented dialog systems. In SereTOD, pages 39-47.

Xueying Zhan, Qingzhong Wang, Kuan-Hao Huang, Haoyi Xiong, Dejing Dou, and Antoni B. Chan. 2022. A comparative survey of deep active learning. CoRR, abs/2203.13450.

Hanlei Zhang, Xiaoteng Li, Hua Xu, Panpan Zhang, Kang Zhao, and Kai Gao. 2021a. TEXTOIR: An integrated and visualized platform for text open intent recognition. In ACL-IJCNLP, pages 167-174.

Hanlei Zhang, Hua Xu, Ting-En Lin, and Rui Lyu. 2021b. Discovering new intents with deep aligned clustering. In AAAI, pages 14365-14373.

Hanlei Zhang, Hua Xu, Xin Wang, Fei Long, and Kai Gao. 2023a. A clustering framework for unsupervised and semi-supervised new intent discovery. IEEE TKDE, pages 1-14.

Ruoyu Zhang, Yanzeng Li, Yongliang Ma, Ming Zhou, and Lei Zou. 2023b. LLMaAA: Making large language models as active annotators. In Findings of EMNLP, pages 13088-13103.

Yuwei Zhang, Zihan Wang, and Jingbo Shang. 2023c. ClusterLLM: Large language models as a guide for text clustering. In EMNLP, pages 13903-13920.

Yuwei Zhang, Haode Zhang, Li-Ming Zhan, Xiao-Ming Wu, and Albert Lam. 2022a. New intent discovery with pre-training and contrastive learning. In ACL, pages 256-269.

Zhisong Zhang, Emma Strubell, and Eduard H. Hovy. 2022b. A survey of active learning for natural language processing. In EMNLP, pages 6166-6190.

Zhun Zhong, Enrico Fini, Subhankar Roy, Zhiming Luo, Elisa Ricci, and Nicu Sebe. 2021. Neighborhood contrastive learning for novel class discovery. In CVPR, pages 10867-10875.

Yunhua Zhou, Guofeng Quan, and Xipeng Qiu. 2023. A probabilistic framework for discovering new intents. In ACL, pages 3771-3784.

# A Appendix
# A.1 Dataset Statistics
We show the detailed statistics of the BANKING, CLINC and StackOverflow datasets in Table 6. Specifically, BANKING is a fine-grained category discovery dataset collected from user dialogues in the banking domain. It contains over 13K user utterances spanning 77 distinct categories. CLINC is a multi-domain dataset, which encompasses 150 distinct categories and 22,500 utterances across 10 domains. StackOverflow is a technical question dataset collected from Kaggle.com, which includes 20K questions with 20 categories.
# A.2 Implementation Details
For the dataset setup, following Zhang et al. (2023a), we randomly select a specified ratio $\{25\%, 50\%\}$ of categories, denoted as the known category rate (KCR), to serve as known categories. For each known category, $10\%$ of the samples are selected to constitute a labeled dataset $\mathcal{D}_l$, while the remaining samples are deemed unlabeled data, forming the unlabeled dataset $\mathcal{D}_u$.
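A small sketch of this split, assuming integer class labels; `split_dataset` and its defaults mirror the stated KCR and 10% labeled ratio but are otherwise our own scaffolding:

```python
import numpy as np

def split_dataset(labels, kcr=0.50, labeled_ratio=0.10, seed=0):
    """Mark KCR of the categories as known and keep 10% of each known
    category labeled (D_l); everything else forms the unlabeled D_u."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    known = rng.choice(classes, size=int(len(classes) * kcr), replace=False)
    is_labeled = np.zeros(len(labels), dtype=bool)
    for c in known:
        idx = np.flatnonzero(labels == c)
        picked = rng.choice(idx, size=max(1, int(len(idx) * labeled_ratio)),
                            replace=False)
        is_labeled[picked] = True
    return is_labeled, set(known.tolist())
```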
For the Uncertainty Propagation, we set the degrees of freedom $\alpha$ in Equation (1) to 1.0. The number of propagated neighbors is set to 25 for all datasets. The $\rho$ for calculating similarities in Equation (4) is set to 1.0.
For the Comparison-based Prompting, we employ gpt-3.5-turbo as the base LLM in our experiments. When acquiring supervision signals, the temperature is set to 0 for deterministic outputs, and the maximum number of output tokens is capped at 256. Default values are retained for the remaining parameters. The number of representative samples is set to 38 for the BANKING dataset, 75 for the CLINC dataset, and 20 for the StackOverflow dataset.
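With the OpenAI Python client, the stated decoding configuration corresponds to a call like the following sketch (the prompt itself comes from the CP method and is not reproduced here):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def query_llm(prompt: str) -> str:
    """One deterministic CP query with the settings stated above."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        max_tokens=256,
    )
    return resp.choices[0].message.content
```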
# A.3 Evaluation Metrics
In the experiments, we employ three standard evaluation metrics: ACC, ARI, and NMI to evaluate the GCD performance. Specifically, ACC measures the performance of GCD by comparing the predicted labels with the ground-truth labels. The definition of ACC is as follows:
$$
ACC = \frac{\sum_{i=1}^{N} \mathbb{1}_{y_i = \mathrm{map}(\hat{y}_i)}}{N}
$$
<table><tr><td>Dataset</td><td>Domain</td><td>Categories</td><td>Utterances</td></tr><tr><td>BANKING</td><td>banking</td><td>77</td><td>13,083</td></tr><tr><td>CLINC</td><td>multi-domain</td><td>150</td><td>22,500</td></tr><tr><td>StackOverflow</td><td>question</td><td>20</td><td>20,000</td></tr></table>
Table 6: Statistics of datasets used in the experiments.
where $\{\hat{y}_i, y_i\}$ denote the predicted label and the ground-truth label of a given sample $x_i$, respectively. $\mathrm{map}(\cdot)$ is a mapping function that maps each predicted label $\hat{y}_i$ to its corresponding ground-truth label $y_i$ via the Hungarian algorithm.
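This metric is straightforward to implement with the Hungarian solver from SciPy; a self-contained sketch, assuming `y_true` and `y_pred` are integer-encoded NumPy arrays:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC: find the one-to-one mapping from predicted clusters to
    ground-truth labels that maximizes agreement, then score it."""
    k = max(y_true.max(), y_pred.max()) + 1
    hits = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        hits[p, t] += 1
    # The Hungarian algorithm minimizes cost, so negate the agreement counts.
    rows, cols = linear_sum_assignment(-hits)
    mapping = dict(zip(rows, cols))
    return np.mean([mapping[p] == t for t, p in zip(y_true, y_pred)])
```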
ARI calculates the similarity between the predicted and ground-truth clusters, assessing the accuracy of clustering on a pairwise basis. ARI is defined as:
$$
ARI = \frac{\sum_{i,j} \binom{n_{i,j}}{2} - \left[ \sum_{i} \binom{u_i}{2} \sum_{j} \binom{v_j}{2} \right] / \binom{N}{2}}{\frac{1}{2} \left[ \sum_{i} \binom{u_i}{2} + \sum_{j} \binom{v_j}{2} \right] - \left[ \sum_{i} \binom{u_i}{2} \sum_{j} \binom{v_j}{2} \right] / \binom{N}{2}}
$$
where $u_{i} = \sum_{j} n_{i,j}$ and $v_{j} = \sum_{i} n_{i,j}$. $N$ denotes the total number of samples, and $n_{i,j}$ is the number of samples assigned to both the $i^{th}$ predicted cluster and the $j^{th}$ ground-truth cluster.
NMI computes the normalized mutual information to quantify the agreement between the predicted and ground-truth clusters, providing a measure of clustering consistency. It can be calculated as follows:
$$
NMI(\hat{\boldsymbol{y}}, \boldsymbol{y}) = \frac{2 \cdot I(\hat{\boldsymbol{y}}, \boldsymbol{y})}{H(\hat{\boldsymbol{y}}) + H(\boldsymbol{y})}
$$
where $\{\hat{\boldsymbol{y}}, \boldsymbol{y}\}$ denote the predicted labels and the ground-truth labels, respectively. $I(\hat{\boldsymbol{y}}, \boldsymbol{y})$ is the mutual information between $\hat{\boldsymbol{y}}$ and $\boldsymbol{y}$, and $H(\cdot)$ represents the entropy function.
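In practice, both ARI and this NMI variant are available off the shelf; with scikit-learn's default arithmetic averaging, `normalized_mutual_info_score` matches the $2 \cdot I / (H + H)$ normalization above (reusing the label arrays from the ACC sketch):

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

ari = adjusted_rand_score(y_true, y_pred)
nmi = normalized_mutual_info_score(y_true, y_pred)  # arithmetic averaging
```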
# A.4 Baselines
In this work, we compare the proposed ALUP with the following representative baselines:
- DTC (Han et al., 2019): A semi-supervised deep clustering approach with a novel mechanism for estimating the number of intents based on labeled data.
- CDAC+ (Lin et al., 2020): A pseudo-labeling approach that employs pairwise constraints and a target distribution as guiding factors in the learning of new categories.
- DeepAligned (Zhang et al., 2021b): A semi-supervised approach that addresses the clustering inconsistency problem by using an alignment strategy for learning utterance embeddings.
- ProbNID (Zhou et al., 2023): A probabilistic framework that capitalizes on the expectation-maximization algorithm, conceptualizing intent assignments as probable latent variables.
- DCSC (Wei et al., 2022): A pseudo-labeling method with a dual-task design, which uses the SwAV algorithm and Sinkhorn-Knopp (Cuturi, 2013) to assign soft clusters.
- MTP-CLNN (Zhang et al., 2022a): A two-stage method that enhances representation learning via a multi-task pre-training and a nearest neighbor contrastive learning for identifying new categories.
- USNID (Zhang et al., 2023a): A framework that supports both unsupervised and semi-supervised new intent discovery, incorporating an effective centroid initialization strategy that learns cluster representations from historical clustering information.
- CsePL (Liang and Liao, 2023): A method that utilizes two-level contrastive learning with label semantic alignment to enhance the cluster semantics and a soft prompting strategy for discovering new intents.
We re-run the released code of ProbNID to get its results. The other baselines' results are retrieved from Zhang et al. (2023a).
# B Estimate the Category Number K
In the complex task of generalized category discovery in real-world scenarios, accurately predicting the total number of categories, represented as $K$, remains a significant challenge. Drawing from the methodologies proposed by Zhang et al. (2021b), our research leverages pre-initialized intent features to determine $K$ autonomously. We begin by assigning an initially large number of clusters, $K'$, and then utilize a refined model to extract feature representations from our training dataset. These representations are grouped into distinct clusters using the K-means algorithm. Clusters that are densely populated and demonstrate well-defined boundaries are recognized as valid category clusters. Conversely, smaller, less distinct clusters are considered less relevant and subsequently discarded. The selection criteria for this process can be outlined as follows.
$$
K = \sum_{i=1}^{K'} \delta(|S_i| > \rho),
$$
where $|S_{i}|$ is the size of the $i$-th grouped cluster and $\rho$ is the filtering threshold. $\delta(\cdot)$ denotes the indicator function, which outputs 1 if the condition is satisfied and 0 otherwise.
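A compact sketch of this filtering procedure; the over-clustering size `k_init` and the threshold `rho` are left as inputs, since their exact values are not specified here:

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_k(features, k_init, rho):
    """Over-cluster with a deliberately large K', then count only the
    clusters whose size exceeds the filtering threshold rho."""
    labels = KMeans(n_clusters=k_init, n_init=10).fit_predict(features)
    sizes = np.bincount(labels, minlength=k_init)
    return int((sizes > rho).sum())
```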
Experimental results are reported in Table 7. The comparative results show that the proposed ALUP incurs only a minor performance decline with the predicted category number. This indicates that our ALUP is robust to an inaccurately predicted category number.

<table><tr><td rowspan="2">Cluster Num</td><td rowspan="2">Methods</td><td colspan="3">Banking77</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td rowspan="3">K=77 (gold)</td><td>USNID</td><td>65.85</td><td>56.53</td><td>81.94</td></tr><tr><td>CsePL</td><td>71.06</td><td>60.36</td><td>83.32</td></tr><tr><td>ALUP</td><td>74.61</td><td>62.64</td><td>84.06</td></tr><tr><td rowspan="3">K=74 (predicted)</td><td>USNID</td><td>60.72</td><td>49.18</td><td>78.11</td></tr><tr><td>CsePL</td><td>69.75</td><td>56.70</td><td>81.30</td></tr><tr><td>ALUP</td><td>72.55</td><td>61.04</td><td>82.78</td></tr></table>

Table 7: Effect of estimating the cluster number $K$.
2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b4246b7ebf22a81ce23795ce29fb47b1be08534587c35bab8b00c8d0b73ffa3
size 556137
2024/Actively Learn from LLMs with Uncertainty Propagation for Generalized Category Discovery/layout.json
ADDED
The diff for this file is too large to render. See raw diff.
2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/8df20923-51bb-4d14-9407-d091f93cc639_content_list.json
ADDED
@@ -0,0 +1,2244 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
273,
|
| 8 |
+
90,
|
| 9 |
+
724,
|
| 10 |
+
130
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Chonghua Wang $^{2*}$ , Haodong Duan $^{1\\dagger}$ , Songyang Zhang $^{1}$ , Dahua Lin $^{1}$ , Kai Chen $^{1‡}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
154,
|
| 19 |
+
145,
|
| 20 |
+
845,
|
| 21 |
+
164
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ Shanghai AI Laboratory",
|
| 28 |
+
"bbox": [
|
| 29 |
+
396,
|
| 30 |
+
164,
|
| 31 |
+
603,
|
| 32 |
+
180
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "$^{2}$ Shanghai Jiao Tong University",
|
| 39 |
+
"bbox": [
|
| 40 |
+
371,
|
| 41 |
+
180,
|
| 42 |
+
630,
|
| 43 |
+
197
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "philipwang@sjtu.edu.cn",
|
| 50 |
+
"bbox": [
|
| 51 |
+
366,
|
| 52 |
+
198,
|
| 53 |
+
633,
|
| 54 |
+
212
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "duanhaodong@pjlab.org.cn",
|
| 61 |
+
"bbox": [
|
| 62 |
+
354,
|
| 63 |
+
215,
|
| 64 |
+
647,
|
| 65 |
+
230
|
| 66 |
+
],
|
| 67 |
+
"page_idx": 0
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"text": "Abstract",
|
| 72 |
+
"text_level": 1,
|
| 73 |
+
"bbox": [
|
| 74 |
+
260,
|
| 75 |
+
260,
|
| 76 |
+
339,
|
| 77 |
+
275
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "Recently, the large language model (LLM) community has shown increasing interest in enhancing LLMs' capability to handle extremely long documents. As various long-text techniques and model architectures emerge, the precise and detailed evaluation of models' long-text capabilities has become increasingly important. Existing long-text evaluation benchmarks, such as L-Eval and LongBench, construct long-text test sets based on open-source datasets, focusing mainly on QA and summarization tasks. These datasets include test samples of varying lengths (from 2k to $32\\mathrm{k}+$ ) entangled together, making it challenging to assess model capabilities across different length ranges. Moreover, they do not cover the ultralong settings ( $100\\mathrm{k}+$ tokens) that the latest LLMs claim to achieve. In this paper, we introduce Ada-LEval, a length-adaptable benchmark for evaluating the long-context understanding of LLMs. Ada-LEval includes two challenging subsets, TSort and BestAnswer, which enable a more reliable evaluation of LLMs' long context capabilities. These benchmarks support intricate manipulation of the length of test cases, and can easily produce text samples up to 128k tokens. We evaluate 4 state-of-the-art closed-source API models and 6 open-source models with Ada-LEval. The evaluation results demonstrate the limitations of current LLMs, especially in ultra-long-context settings. Our code is available at https://github.com/open-compass/Ada-LEval.",
|
| 84 |
+
"bbox": [
|
| 85 |
+
141,
|
| 86 |
+
291,
|
| 87 |
+
460,
|
| 88 |
+
760
|
| 89 |
+
],
|
| 90 |
+
"page_idx": 0
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"type": "text",
|
| 94 |
+
"text": "1 Introduction",
|
| 95 |
+
"text_level": 1,
|
| 96 |
+
"bbox": [
|
| 97 |
+
114,
|
| 98 |
+
777,
|
| 99 |
+
258,
|
| 100 |
+
791
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "Large Language Models (LLMs), typically based on large transformers trained on vast corpus, have shown exceptional abilities in memorization, comprehension, and reasoning (OpenAI, 2023; Touvron et al., 2023; Zheng et al., 2023). A critical factor",
|
| 107 |
+
"bbox": [
|
| 108 |
+
112,
|
| 109 |
+
804,
|
| 110 |
+
489,
|
| 111 |
+
885
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "image",
|
| 117 |
+
"img_path": "images/ee782392c36a9d22dd51110e9eafd7bf7261fbd465adbca891d20b28176d375a.jpg",
|
| 118 |
+
"image_caption": [
|
| 119 |
+
"BestAnswer Task"
|
| 120 |
+
],
|
| 121 |
+
"image_footnote": [],
|
| 122 |
+
"bbox": [
|
| 123 |
+
517,
|
| 124 |
+
260,
|
| 125 |
+
877,
|
| 126 |
+
404
|
| 127 |
+
],
|
| 128 |
+
"page_idx": 0
|
| 129 |
+
},
|
| 130 |
+
{
|
| 131 |
+
"type": "image",
|
| 132 |
+
"img_path": "images/af05071cb1f4fe4eb124173188ac8e6ca895d5a5f50045137ab6afeecf61bab5.jpg",
|
| 133 |
+
"image_caption": [
|
| 134 |
+
"Figure 1: The demonstration of two tasks: TSort and BestAnswer introduced in Ada-LEval. Understanding and reasoning over the full text are required to solve these two tasks."
|
| 135 |
+
],
|
| 136 |
+
"image_footnote": [],
|
| 137 |
+
"bbox": [
|
| 138 |
+
515,
|
| 139 |
+
426,
|
| 140 |
+
877,
|
| 141 |
+
589
|
| 142 |
+
],
|
| 143 |
+
"page_idx": 0
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"type": "text",
|
| 147 |
+
"text": "that affects LLM performance is the 'context window' - the number of tokens an LLM can process simultaneously. This window's size is pivotal in handling lengthy texts. Since the debut of ChatGPT with a 2,000-token window in November 2022, significant efforts have been made in this domain, including more efficient attention mechanisms (Dao et al., 2022a; Zaheer et al., 2020; Ding et al., 2023), scalable position embeddings (Su et al., 2021; Sun et al., 2022), and quantization techniques (Frantar et al., 2022; Dettmers et al., 2022). As of December 2023, several LLMs claim to achieve context windows up to hundreds of thousands of tokens. This includes both proprietary models like GPT-4 Turbo",
|
| 148 |
+
"bbox": [
|
| 149 |
+
507,
|
| 150 |
+
696,
|
| 151 |
+
884,
|
| 152 |
+
921
|
| 153 |
+
],
|
| 154 |
+
"page_idx": 0
|
| 155 |
+
},
|
| 156 |
+
{
|
| 157 |
+
"type": "page_footnote",
|
| 158 |
+
"text": "*The work was done during an internship at Shanghai AI Laboratory; † Project Lead; ‡ Corresponding Author.",
|
| 159 |
+
"bbox": [
|
| 160 |
+
112,
|
| 161 |
+
895,
|
| 162 |
+
487,
|
| 163 |
+
921
|
| 164 |
+
],
|
| 165 |
+
"page_idx": 0
|
| 166 |
+
},
|
| 167 |
+
{
|
| 168 |
+
"type": "page_number",
|
| 169 |
+
"text": "3712",
|
| 170 |
+
"bbox": [
|
| 171 |
+
480,
|
| 172 |
+
927,
|
| 173 |
+
519,
|
| 174 |
+
940
|
| 175 |
+
],
|
| 176 |
+
"page_idx": 0
|
| 177 |
+
},
|
| 178 |
+
{
|
| 179 |
+
"type": "footer",
|
| 180 |
+
"text": "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics:",
|
| 181 |
+
"bbox": [
|
| 182 |
+
137,
|
| 183 |
+
945,
|
| 184 |
+
857,
|
| 185 |
+
957
|
| 186 |
+
],
|
| 187 |
+
"page_idx": 0
|
| 188 |
+
},
|
| 189 |
+
{
|
| 190 |
+
"type": "footer",
|
| 191 |
+
"text": "Human Language Technologies (Volume 1: Long Papers), pages 3712-3724",
|
| 192 |
+
"bbox": [
|
| 193 |
+
267,
|
| 194 |
+
958,
|
| 195 |
+
729,
|
| 196 |
+
971
|
| 197 |
+
],
|
| 198 |
+
"page_idx": 0
|
| 199 |
+
},
|
| 200 |
+
{
|
| 201 |
+
"type": "footer",
|
| 202 |
+
"text": "June 16-21, 2024 ©2024 Association for Computational Linguistics",
|
| 203 |
+
"bbox": [
|
| 204 |
+
290,
|
| 205 |
+
972,
|
| 206 |
+
705,
|
| 207 |
+
985
|
| 208 |
+
],
|
| 209 |
+
"page_idx": 0
|
| 210 |
+
},
{
"type": "text",
"text": "(128,000 tokens), Claude-2.1 (200,000 tokens), and Moonshot AI (200,000 Chinese characters), and open-source models such as ChatGLM-32k (Zeng et al., 2022) and LongChat-32k (Li* et al., 2023). This expansion significantly enhances the potential for processing extensive documents. Nevertheless, the effectiveness of these long-context LLMs in managing long texts remains an area ripe for exploration and assessment.",
"bbox": [112, 84, 489, 227],
"page_idx": 1
},
{
"type": "text",
"text": "Alongside the evolution of LLMs, a wide range of benchmarks have emerged for capability assessment (Hendrycks et al., 2020; Suzgun et al., 2022; Cobbe et al., 2021; Huang et al., 2023). Most of these benchmarks utilize short questions or instructions, making them unsuitable for evaluating LLMs' long-context capabilities. While a few benchmarks do focus on assessing specific long-context abilities like summarization, question-answering (QA), and continuation writing (Huang et al., 2021; Liu et al., 2023b; Dasigi et al., 2021), comprehensive long-document evaluations have been limited. Recent benchmarks such as SCROLLS (Shaham et al., 2022), L-Eval (An et al., 2023) and LongBench (Bai et al., 2023) have started to address this gap by including a suite of long-document tasks, aiming for a more holistic assessment of LLMs' long-context understanding.",
"bbox": [115, 229, 489, 518],
"page_idx": 1
},
{
"type": "text",
"text": "Despite these advancements, three significant limitations persist in existing benchmarks: Firstly, the ultra-long setting (32,000 tokens or longer) is scarcely represented, limiting insights into LLM performance in extreme context lengths. Secondly, the integration of test samples of varying lengths within these benchmarks complicates the evaluation of LLMs across different length ranges. Lastly, the focus on traditional tasks such as question-answering and summarization often does not necessitate comprehensive content understanding by the LLMs, as many questions in these tasks do not require full-text comprehension. This highlights the need for more targeted benchmarks that can rigorously evaluate the deep and complete understanding of long-form content by LLMs.",
"bbox": [115, 519, 489, 775],
"page_idx": 1
},
{
"type": "text",
"text": "To this end, we introduce Ada-LEval, a pioneering benchmark that assesses long-context capabilities with length-adaptable questions. Ada-LEval comprises two challenging tasks: TSort, which involves arranging text segments in the correct order, and BestAnswer, which requires choosing the best answer to a question among multiple candidates. Both tasks feature the following advantages: 1. Controllable Test Cases: The length of each test",
"bbox": [112, 777, 489, 921],
"page_idx": 1
},
{
"type": "text",
"text": "case can be finely tuned - by adjusting the number and length of text segments in TSort and altering the number of distractor options in BestAnswer. 2. Necessity for Full-Text Comprehension: Successful completion of both tasks mandates complete reading and understanding of the provided text. 3. Precise Accuracy Measurement: The design of these tasks allows for unambiguous accuracy calculation. TSort has a definitive 'correct' order, whereas in BestAnswer, the annotated responses by the questioner serve as definitive answers.",
"bbox": [507, 84, 884, 260],
"page_idx": 1
},
{
"type": "text",
"text": "Our experiments on these tasks reveal critical insights. We observe a noteworthy decline in the performance of existing LLMs as text length increases, particularly in ultra-long scenarios. Furthermore, our ablation study uncovers several shortcomings in current LLMs, including limited instruction following over extended texts and pronounced input order bias. Additionally, we explore various scalable position embedding techniques aimed at enlarging the context window of LLMs. Our findings indicate that models equipped with those techniques show improved performance over the standard models, and their performance is comparable to that of counterparts trained on longer contexts.",
"bbox": [507, 262, 884, 487],
"page_idx": 1
},
{
"type": "text",
"text": "2 Related Work",
"text_level": 1,
"bbox": [509, 501, 665, 516],
"page_idx": 1
},
{
"type": "text",
"text": "2.1 Long-Context Techniques",
"text_level": 1,
"bbox": [509, 527, 757, 544],
"page_idx": 1
},
{
"type": "text",
"text": "To address the complexities introduced by the increased text length in language models, researchers have developed a range of innovative techniques. These methodologies primarily focus on the following key areas: more efficient attention mechanisms, divide-and-conquer paradigms, and scalable position embedding techniques.",
"bbox": [507, 550, 884, 662],
"page_idx": 1
},
{
"type": "text",
"text": "Efficient Attention Mechanisms. Notable advancements in attention mechanisms within Transformers have been achieved by several studies (Zaheer et al., 2020; Guo et al., 2021; Dao et al., 2022b; Ding et al., 2023). A key development in this area is Flash Attention (Dao et al., 2022a), which streamlines the attention process by circumventing the need to read and write the attention matrix across different memory tiers. This approach results in faster processing and reduced memory usage compared to traditional attention methods. In LongNet, Ding et al. (2023) introduce Dilated Attention, which reduces the computation complexity of attention to nearly linear and scales to 1 billion tokens. However, Liu et al. (2023a) identified a limitation where these mechanisms tend to falter with the",
"bbox": [507, 664, 885, 920],
"page_idx": 1
},
{
"type": "page_number",
"text": "3713",
"bbox": [480, 928, 519, 940],
"page_idx": 1
},
{
"type": "text",
"text": "middle portions of long texts.",
"bbox": [112, 84, 334, 99],
"page_idx": 2
},
{
"type": "text",
"text": "Divide-and-Conquer. In exploring alternatives to conventional long-text modeling, several studies have adopted a segmented approach to manage extensive content. WebGPT (Nakano et al., 2021) addresses long-form QA by interacting with a text-based web-browsing environment. PEARL (Sun et al., 2023) introduces a framework that prompts LLMs to generate and execute plans for tackling complex long-text reasoning tasks. Chen et al. (2023a) construct a memory tree from the summaries of document segments and navigate the memory tree to answer the original question.",
"bbox": [112, 99, 487, 292],
"page_idx": 2
},
{
"type": "text",
"text": "Scalable Position Embeddings. Scalable position embeddings have been instrumental in extending the context window of LLMs. RoPE (Su et al., 2021) utilizes a rotation matrix to enhance positional information, integrating explicit relative position dependencies into the self-attention mechanism. ALiBi (Press et al., 2021) does not add position embeddings to word embeddings, instead applying a linearly decreasing penalty to attention scores based on key-query distances. Position Interpolation (Chen et al., 2023b) adopts a different strategy by linearly scaling down input position indices to align with preset context window sizes, requiring only a few fine-tuning steps. NTK-aware Scaled RoPE<sup>1</sup> and ReRoPE (Su, 2023) further combine the benefits of position interpolation and length extrapolation methods without any fine-tuning steps.",
"bbox": [115, 294, 489, 567],
"page_idx": 2
},
{
"type": "text",
"text": "2.2 Long-Context Language Models",
"text_level": 1,
"bbox": [112, 579, 413, 594],
"page_idx": 2
},
{
"type": "text",
"text": "Building on advancements in long-context techniques, several long-context LLMs have been developed and released. Llama 2 (Touvron et al., 2023) integrates RoPE to expand its context window to 4,000 tokens. Vicuna-v1.5 (Zheng et al., 2023) further extends this capability by fine-tuning Llama 2 on high-quality, extensive conversations, successfully increasing the context window to 16,000 tokens. Longchat (Li* et al., 2023) models condense RoPE to utilize model weights learned in the pretraining stage. ChatGLM2-32k (Zeng et al., 2022) is trained on a 32,000-token context length using position interpolation, showcasing the scalability of this technique.",
"bbox": [112, 600, 489, 824],
"page_idx": 2
},
{
"type": "text",
"text": "The domain of proprietary language models has seen even more significant advancements in long-context modeling, stepping into the ultra-long con",
"bbox": [112, 826, 489, 873],
"page_idx": 2
},
{
"type": "text",
"text": "text field. GPT-4-Turbo (OpenAI, 2023) notably extends its context window to an impressive 128,000 tokens. In a similar vein, Claude-2 and Claude-2.1 have achieved context lengths of 100,000 and 200,000 tokens, respectively. This expansion allows them to process vast quantities of information, such as hundreds of pages of technical documentation or entire books. Kimi Chat, developed by Moonshot.ai, claims to handle up to 200,000 Chinese characters. However, no existing dataset can evaluate the capability of tackling such long texts.",
"bbox": [507, 84, 885, 261],
"page_idx": 2
},
{
"type": "text",
"text": "2.3 Long-Context Benchmarks",
"text_level": 1,
"bbox": [507, 275, 769, 290],
"page_idx": 2
},
{
"type": "text",
"text": "Efforts to evaluate the long-context capabilities of language models have been intensifying, with a focus primarily on traditional question-answering (QA) and summarization tasks. NarrativeQA (Kočiský et al., 2018) offers a question-answering dataset built on entire books from Project Gutenberg and movie transcripts. GovReport (Huang et al., 2021) provides a dataset comprising national policy issues, each accompanied by an expert-written summary, thus testing models' ability to distill complex, lengthy documents into concise summaries. Based on existing long-context benchmarks, SCROLLS (Shaham et al., 2022) introduces a suite of datasets that requires models to process and reason over long contexts.",
"bbox": [507, 298, 884, 538],
"page_idx": 2
},
{
"type": "text",
"text": "Concurrently, L-Eval (An et al., 2023) and Long-Bench (Bai et al., 2023) are designed for comprehensive evaluation of long-context capabilities of LLMs. L-Eval offers a collection of long documents across different domains and provides both close-ended and open-ended tasks. LongBench is a bilingual long context benchmark covering six task categories. Most tasks in these benchmarks are traditional QA and summarization with fixed documents, questions, and answers. They are inflexible in text length (up to $\\sim 32,000$ tokens) and fall short of adapting to ultra-long context evaluation. Additionally, LongBench uses mostly open-ended tasks with traditional F1 and ROUGE metrics that may not align well with human judgments. In contrast, our benchmarks support length-adaptable evaluation, provide sufficient cases and evaluate models using accuracy metrics, avoiding inconsistencies with human evaluation.",
"bbox": [507, 541, 885, 845],
"page_idx": 2
},
{
"type": "text",
"text": "3 Ada-LEval",
"text_level": 1,
"bbox": [507, 860, 640, 876],
"page_idx": 2
},
{
"type": "text",
"text": "In this section, we outline the construction process of Ada-LEval, detailing both the collection",
"bbox": [507, 890, 882, 921],
"page_idx": 2
},
{
"type": "page_footnote",
"text": "1https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/",
"bbox": [112, 883, 472, 920],
"page_idx": 2
},
{
"type": "page_number",
"text": "3714",
"bbox": [480, 927, 519, 940],
"page_idx": 2
},
{
"type": "text",
"text": "methodology of our source data and the building procedure of our test cases. Table 1 presents the data statistics of Ada-LEval.",
"bbox": [112, 84, 487, 131],
"page_idx": 3
},
{
"type": "table",
"img_path": "images/0c4a385a40d902d8d4bcd35e71fdf3d9a93cfdbb1d9dcb3a83c230caa2fe47eb.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td colspan=\"4\">TSort</td></tr><tr><td>Setting</td><td>Total #Cases Built</td><td>Max #Tokens</td><td>Avg #Tokens</td></tr><tr><td>2k</td><td>5123</td><td>2000</td><td>1816</td></tr><tr><td>4k</td><td>5451</td><td>4000</td><td>3724</td></tr><tr><td>8k</td><td>5324</td><td>8000</td><td>7663</td></tr><tr><td>16k</td><td>4957</td><td>16000</td><td>15662</td></tr><tr><td>32k</td><td>2206</td><td>32000</td><td>31226</td></tr><tr><td>64k</td><td>1658</td><td>64000</td><td>62407</td></tr><tr><td>128k</td><td>782</td><td>127800</td><td>121488</td></tr></table>",
"bbox": [119, 141, 485, 259],
"page_idx": 3
},
{
"type": "table",
"img_path": "images/7c975151416ef9415db7294b8d407ef048f4662d0ab3c192cc9bcc76fca4e839.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td colspan=\"4\">BestAnswer</td></tr><tr><td>Setting</td><td>Total #Cases Built</td><td>Max #Tokens</td><td>Avg #Tokens</td></tr><tr><td>1k</td><td>7526</td><td>1128</td><td>955</td></tr><tr><td>2k</td><td>7526</td><td>2154</td><td>1983</td></tr><tr><td>4k</td><td>7526</td><td>4215</td><td>3994</td></tr><tr><td>6k</td><td>7526</td><td>6268</td><td>6012</td></tr><tr><td>8k</td><td>7526</td><td>7790</td><td>7518</td></tr><tr><td>12k</td><td>7526</td><td>12389</td><td>12091</td></tr><tr><td>16k</td><td>7526</td><td>15964</td><td>15646</td></tr><tr><td>32k</td><td>200</td><td>32974</td><td>32329</td></tr><tr><td>64k</td><td>200</td><td>64216</td><td>63274</td></tr><tr><td>128k</td><td>200</td><td>127059</td><td>126098</td></tr></table>",
"bbox": [119, 260, 484, 412],
"page_idx": 3
},
{
"type": "text",
"text": "Table 1: The data statistics of TSort and BestAnswer. We adopt the GPT-4 tokenizer CL100K to calculate token numbers. We use a subset of all built cases for evaluation.",
"bbox": [112, 420, 489, 479],
"page_idx": 3
},
{
"type": "text",
"text": "3.1 Task Definition",
"text_level": 1,
"bbox": [112, 511, 280, 524],
"page_idx": 3
},
{
"type": "text",
"text": "TSort. TSort provides LLMs with N shuffled text segments, extracted from contiguous chapters of a long book. The task for models is to sort these segments into their original sequence. A response is regarded as accurate only if it precisely reinstates the segments' initial order. To simplify the challenge and minimize possible confusion, we supply LLMs with adjacent paragraphs from before and after the specified chapters to serve as contextual hints.",
"bbox": [112, 532, 487, 677],
"page_idx": 3
},
{
"type": "text",
"text": "BestAnswer. Each test case in BestAnswer contains one question and a large number of possible answers to this question. We consider the answer designated by the original inquirer as the most helpful answer, while LLMs are required to identify this optimal answer among all possible candidates.",
"bbox": [112, 678, 489, 776],
"page_idx": 3
},
{
"type": "text",
"text": "3.2 Source Data Collection",
"text_level": 1,
"bbox": [112, 787, 341, 802],
"page_idx": 3
},
{
"type": "text",
"text": "TSort. For TSort, we sourced our initial data from Booksum (Kryściński et al., 2021), a text summarization dataset derived from Project Gutenberg, a public book repository consisting of over 60,000 free eBooks spanning various literary genres including novels, plays, short stories, and more. Genres like epistolary literature and poetry are excluded in",
"bbox": [112, 808, 489, 921],
"page_idx": 3
},
{
"type": "text",
"text": "the construction of the TSort benchmark due to their non-sequential nature. To prevent LLMs from exploiting superficial cues, we meticulously remove identifiers such as chapter numbers and annotations from the content.",
"bbox": [507, 84, 884, 162],
"page_idx": 3
},
{
"type": "text",
"text": "BestAnswer. The BestAnswer benchmark is constructed using threads from Stack Overflow, a platform renowned for its extensive range of programming-related questions and answers. Stack Overflow questions are categorized by multiple tags, indicating the thematic similarity of questions within each tag. To ensure the quality and diversity of our benchmark, we choose 23 different tags, including javascript, python, C++, etc., and collect the top 2,500 questions from each tag based on popularity.",
"bbox": [507, 165, 885, 342],
"page_idx": 3
},
{
"type": "text",
"text": "3.3 Test Case Building",
"text_level": 1,
"bbox": [507, 353, 702, 369],
"page_idx": 3
},
{
"type": "text",
"text": "For both tasks, we construct test cases according to their token length (measured by the GPT-4 tokenizer). We regard token lengths between 1,000 and 16,000 as long-context settings and text lengths exceeding 16,000 as ultra-long-context settings.",
"bbox": [507, 374, 882, 455],
"page_idx": 3
},
{
"type": "text",
"text": "Under long-context settings, TSort cases span test cases with 2k, 4k, 8k, and 16k tokens. For each length, we fix the segment number $N = 4$ and the length upper limit for each text segment and adjacent paragraphs before and after these contiguous chapters. We ensure that each text segment contains complete paragraphs, thus no paragraph is sliced in the middle. To build test cases with different contents, we set a stride between the beginning paragraphs of test cases during construction. After prepending the instructions, we further filter out test cases that exceed the token upper bound.",
"bbox": [507, 456, 882, 646],
"page_idx": 3
},
{
"type": "text",
"text": "For BestAnswer, we generate test cases with 1k, 2k, 4k, 6k, 8k, 12k, and 16k tokens under long-context settings. Test cases contain the distractor answers under the corresponding question and an adaptable number of distractor answers from other similar questions under each length setting. To make evaluation results directly comparable across different length settings in long context scenarios, we ensure that the questions within the BestAnswer benchmark remain unchanged, regardless of the case length. In BestAnswer, we define the most helpful answer as the answer explicitly accepted by the inquirer, and adopt it as the 'groundtruth answer'. For integrity reasons, we exclude all",
"bbox": [507, 649, 882, 873],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "2We do not choose the answer with the highest number of votes, since the vote number can be influenced by factors such as the answer posting time and the identity of the respondent,",
"bbox": [507, 883, 882, 921],
"page_idx": 3
},
{
"type": "page_number",
"text": "3715",
"bbox": [480, 927, 519, 940],
"page_idx": 3
},
{
"type": "text",
"text": "questions where the corresponding most helpful answer is not text-only. When choosing the distractors, we only consider answers that are provided prior to the accepted answer under the corresponding question. Besides, we incorporate answers from other questions with similar tags to the original question to serve as distractor answers.",
"bbox": [112, 84, 487, 197],
"page_idx": 4
},
{
"type": "text",
"text": "Under ultra-long-context settings, we build test cases with 32k, 64k, and 128k tokens for both tasks. The construction paradigm is similar to the long-context setting. For BestAnswer, since the number of similar questions and the corresponding answers are limited, we relax tag similarity constraints and allow answers of questions with less similar tags to serve as the distractor answers.",
"bbox": [110, 198, 489, 326],
"page_idx": 4
},
{
"type": "text",
"text": "4 Evaluation Results",
"text_level": 1,
"bbox": [112, 343, 310, 357],
"page_idx": 4
},
{
"type": "text",
"text": "4.1 Experiment Setup",
"text_level": 1,
"bbox": [112, 372, 302, 388],
"page_idx": 4
},
{
"type": "text",
"text": "We evaluate the following LLMs under long-context settings: 4 proprietary models: (1) GPT-4-Turbo-0125, (2) GPT-4-Turbo-1106, (3) GPT-3.5-Turbo-1106, (4) Claude-2; and 6 open-source models: (5) LongChat-7b-v1.5-32k (Zheng et al., 2023), (6) ChatGLM2-6B-32k (Zeng et al., 2022), (7) ChatGLM3-6B-32k (Zeng et al., 2022), (8) Vicuna-7b-v1.5-16k (Zheng et al., 2023), (9) Vicuna-13b-v1.5-16k (Zheng et al., 2023), (10) InternLM2-7b (Cai et al., 2024). Due to the inferior performance of open-source LLMs under long-context settings, only models with good performance (GPT-4-Turbo, Claude-2, etc.) are evaluated under ultra-long-context settings.",
"bbox": [110, 395, 489, 620],
"page_idx": 4
},
{
"type": "text",
"text": "For open-source LLMs, we sample a 1000-testcase subset for evaluation under each length setting. Due to the costly API of state-of-the-art proprietary models (GPT-4-Turbo, Claude-2, etc.), we adopt a 200-testcase subset (sampled from the 1000-testcase set) for evaluation under long-context settings, and a 50-testcase subset for evaluation under ultra-long-context settings. All experiments are conducted using the open-source LLM evaluation platform OpenCompass (Contributors, 2023). We adopt the zero-shot setting for all evaluation, and provide a 'random guess' baseline. We also measure the instruction following rate and the copy instruction rate on both tasks.",
"bbox": [112, 623, 489, 847],
"page_idx": 4
},
{
"type": "table",
"img_path": "images/7b364b45d138b6425e9fd417f744d935470d50922c247d02a0d3c1b4f14a488c.jpg",
"table_caption": ["4.2 Long-Context Evaluation Results"],
"table_footnote": [],
"table_body": "<table><tr><td>TSort</td><td>2k</td><td>4k</td><td>8k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-0125</td><td>15.5</td><td>16.5</td><td>8.5</td><td>5.5</td></tr><tr><td>GPT-4-Turbo-1106</td><td>18.5</td><td>15.5</td><td>7.5</td><td>3.5</td></tr><tr><td>GPT-3.5-Turbo-1106</td><td>4.0</td><td>4.5</td><td>4.5</td><td>5.5</td></tr><tr><td>Claude-2</td><td>5.0</td><td>5.0</td><td>4.5</td><td>3.0</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>5.3</td><td>5.0</td><td>3.1</td><td>2.5</td></tr><tr><td>ChatGLM2-6B-32k</td><td>0.9</td><td>0.7</td><td>0.2</td><td>0.9</td></tr><tr><td>ChatGLM3-6B-32k</td><td>2.3</td><td>2.4</td><td>2.0</td><td>0.7</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>5.3</td><td>2.2</td><td>2.3</td><td>1.7</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>5.4</td><td>5.0</td><td>2.4</td><td>3.1</td></tr><tr><td>InternLM2-7b</td><td>5.1</td><td>3.9</td><td>5.1</td><td>4.3</td></tr><tr><td>Random Guess</td><td>4.2</td><td>4.2</td><td>4.2</td><td>4.2</td></tr></table>",
"bbox": [519, 116, 878, 319],
"page_idx": 4
},
{
"type": "text",
"text": "Table 2: TSort results under long-context settings. We fix the number of segments $\\mathbf{N} = 4$ for TSort evaluation, thus random guess accuracy is roughly $4.2\\%$ (1/24).",
"bbox": [507, 330, 882, 373],
"page_idx": 4
},
{
"type": "text",
"text": "TSort. Table 2 displays the test accuracy of various LLMs on the TSort task. This evaluation underscores the complexity of TSort, highlighting its intricate nature that necessitates comprehensive understanding and reasoning across long text. Under settings from 2,000 to 8,000 tokens, only the most powerful proprietary model GPT-4-Turbo outputs the correct order of texts with a significantly higher probability than the random baseline. When the context window expands to 16,000, the quality of GPT-4-Turbo's predictions also deteriorates to the random guess level. Other LLMs, encompassing both proprietary and open-source models, all display performance similar to random guess (even under the relatively short 2k setting). The results indicate that the TSort task poses a severe challenge to existing LLMs.",
"bbox": [507, 388, 884, 663],
"page_idx": 4
},
{
"type": "text",
"text": "BestAnswer. Table 3 presents the test accuracy of LLMs on BestAnswer. GPT-4-Turbo establishes the state-of-the-art on the BestAnswer benchmark. It achieves an outstanding $44.5\\%$ accuracy under the 16k long-context setting, where around 100 distractor answers exist for each question. Among other proprietary models, Claude-2 achieves the second best accuracy $11\\%$ under the 16k setting. GPT-3.5-Turbo-1106, while outperforming Claude-2 under some relatively short settings (2k, 4k, 6k), demonstrates performance similar to random guess under the 16k setting. There is a considerable performance gap between proprietary models and open-source models on BestAnswer. Although some models like Vicuna-13b-v1.5-16k and InternLM2-7b perform well un",
"bbox": [507, 664, 884, 921],
"page_idx": 4
},
{
"type": "page_footnote",
"text": "in addition to its quality.",
"bbox": [112, 858, 265, 871],
"page_idx": 4
},
{
"type": "page_footnote",
"text": "3Instruction following rate denotes whether the LLM outputs follow the pre-defined format. Copy instruction rate measures whether the LLM outputs the same answer as the in-context example provides.",
"bbox": [112, 871, 485, 920],
"page_idx": 4
},
{
"type": "page_number",
"text": "3716",
"bbox": [480, 928, 519, 940],
"page_idx": 4
},
{
"type": "table",
"img_path": "images/2801f873817d3b04e30d8e49bb8bb94b8205791da3d987b48a290c87d697d84f.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>BestAnswer</td><td>1k</td><td>2k</td><td>4k</td><td>6k</td><td>8k</td><td>12k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-0125</td><td>73.5</td><td>73.5</td><td>65.5</td><td>63.0</td><td>56.5</td><td>52.0</td><td>44.5</td></tr><tr><td>GPT-4-Turbo-1106</td><td>74.0</td><td>73.5</td><td>67.5</td><td>59.5</td><td>53.5</td><td>49.5</td><td>44.0</td></tr><tr><td>GPT-3.5-Turbo-1106</td><td>61.5</td><td>48.5</td><td>41.5</td><td>29.5</td><td>17.0</td><td>2.5</td><td>2.5</td></tr><tr><td>Claude-2</td><td>65.0</td><td>43.5</td><td>23.5</td><td>15.0</td><td>17.0</td><td>12.0</td><td>11.0</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>32.4</td><td>10.7</td><td>5.7</td><td>3.1</td><td>1.9</td><td>1.6</td><td>0.8</td></tr><tr><td>ChatGLM2-6B-32k</td><td>31.2</td><td>10.9</td><td>4.5</td><td>1.6</td><td>1.6</td><td>0.0</td><td>0.3</td></tr><tr><td>ChatGLM3-6B-32k</td><td>39.8</td><td>18.8</td><td>9.0</td><td>5.0</td><td>3.4</td><td>0.9</td><td>0.5</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>37.0</td><td>11.1</td><td>5.8</td><td>3.2</td><td>1.8</td><td>1.9</td><td>1.0</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>53.4</td><td>29.2</td><td>13.1</td><td>4.3</td><td>2.2</td><td>1.4</td><td>0.9</td></tr><tr><td>InternLM2-7b</td><td>58.6</td><td>49.5</td><td>33.9</td><td>12.3</td><td>13.4</td><td>2.0</td><td>0.8</td></tr><tr><td>Random Guess</td><td>26.7</td><td>10.1</td><td>4.5</td><td>3.0</td><td>2.3</td><td>1.4</td><td>1.1</td></tr></table>",
"bbox": [201, 80, 803, 274],
"page_idx": 5
},
{
"type": "text",
"text": "der short settings, a dramatic accuracy decline can be observed when text length becomes larger.",
"bbox": [112, 353, 485, 385],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/4c7b19293e3dd97f197714b904cac337bee8d07436e940a044970d6f8e853b3f.jpg",
"table_caption": ["Table 3: BestAnswer results under long-context settings. For a question with N candidate answers, we define the random guess accuracy as $1 / \\mathbf{N}$. The random guess accuracy over a long-context setting is the average of random guess accuracy for all questions within the test set."],
"table_footnote": [],
"table_body": "<table><tr><td>CopyInst Rate</td><td>2k</td><td>4k</td><td>8k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-1106</td><td>25.0</td><td>22.0</td><td>10.5</td><td>1.0</td></tr><tr><td>GPT-3.5-Turbo-1106</td><td>30.0</td><td>25.5</td><td>64.5</td><td>73.3</td></tr><tr><td>Claude-2</td><td>99.5</td><td>95.0</td><td>97.4</td><td>96.9</td></tr><tr><td>Expectation</td><td>5.0</td><td>5.0</td><td>5.0</td><td>5.5</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>100.0</td><td>99.8</td><td>99.1</td><td>100.0</td></tr><tr><td>ChatGLM2-6B-32k</td><td>11.3</td><td>13.8</td><td>10.5</td><td>81.3</td></tr><tr><td>ChatGLM3-6B-32k</td><td>21.6</td><td>54.8</td><td>88.0</td><td>88.1</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>100.0</td><td>100.0</td><td>59.4</td><td>33.3</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>96.6</td><td>99.0</td><td>12.2</td><td>3.1</td></tr><tr><td>Expectation</td><td>5.3</td><td>5.0</td><td>5.4</td><td>5.2</td></tr></table>",
"bbox": [122, 398, 482, 570],
"page_idx": 5
},
{
"type": "text",
"text": "4.3 Error Breakdown",
"text_level": 1,
"bbox": [112, 671, 302, 686],
"page_idx": 5
},
{
"type": "text",
"text": "We further analyze the error instances on TSort and BestAnswer, and find that most errors can be attributed to two categories: 1. The LLM fails to follow the provided instruction and does not output a valid answer; 2. The LLM does output a valid answer, but simply copies the example answer we provide in the in-context example. Figure 2 displays the instruction following rate on TSort and BestAnswer. Tables 4 and 5 provide detailed statistics about the copy instruction rate on TSort and BestAnswer.",
"bbox": [112, 694, 487, 870],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/64a3cfbf9238c2120843bb7dfb4c4be15c5742107409d97f46a198a4866789f4.jpg",
"table_caption": ["Table 4: The copy instruction rate of LLMs on TSort under long-context settings. Expectation means the ratio of test cases for which the in-context example answer is exactly the correct one."],
"table_footnote": [],
"table_body": "<table><tr><td>CopyInst Rate</td><td>1k</td><td>2k</td><td>4k</td><td>6k</td><td>8k</td><td>12k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-1106</td><td>12.5</td><td>8.5</td><td>5.0</td><td>5.5</td><td>6.0</td><td>2.0</td><td>2.0</td></tr><tr><td>GPT-3.5-Turbo-1106</td><td>16.5</td><td>22.5</td><td>18.5</td><td>16.0</td><td>11.5</td><td>2.0</td><td>0.0</td></tr><tr><td>Claude-2</td><td>21.5</td><td>25.5</td><td>40.5</td><td>41.0</td><td>42.5</td><td>49.0</td><td>55.0</td></tr><tr><td>Expectation</td><td>13.0</td><td>7.0</td><td>3.0</td><td>2.0</td><td>2.5</td><td>1.5</td><td>1.5</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>67.4</td><td>94.7</td><td>89.5</td><td>57.8</td><td>70.6</td><td>49.4</td><td>13.0</td></tr><tr><td>ChatGLM2-6B-32k</td><td>36.5</td><td>43.7</td><td>35.8</td><td>27.2</td><td>24.4</td><td>35.5</td><td>44.7</td></tr><tr><td>ChatGLM3-6B-32k</td><td>47.9</td><td>66.1</td><td>33.3</td><td>30.4</td><td>22.5</td><td>24.8</td><td>16.7</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>63.1</td><td>96.2</td><td>91.8</td><td>57.9</td><td>66.6</td><td>27.8</td><td>17.9</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>27.8</td><td>45.8</td><td>55.3</td><td>19.8</td><td>3.4</td><td>5.6</td><td>11.1</td></tr><tr><td>Expectation</td><td>14.4</td><td>10.0</td><td>5.1</td><td>2.3</td><td>1.7</td><td>1.3</td><td>1.2</td></tr></table>",
"bbox": [515, 350, 880, 458],
"page_idx": 5
},
{
"type": "text",
"text": "Table 5: The copy instruction rate of LLMs on BestAnswer under long-context settings. Expectation means the ratio of test cases for which the in-context example answer is exactly the correct one.",
"bbox": [507, 468, 882, 526],
"page_idx": 5
},
{
"type": "text",
"text": "The state-of-the-art GPT-4-Turbo maintains a relatively low copy instruction rate and an impeccable instruction following rate on both tasks. Error instances of Claude-2, LongChat and Vicuna models are predominantly due to an elevated copy instruction rate, while ChatGLM models suffer from a low instruction following rate. It is worth noting that all models, with the sole exception of GPT-4-Turbo, find it more difficult to follow the instruction on both tasks as text length increases.",
"bbox": [507, 551, 882, 712],
"page_idx": 5
},
{
"type": "text",
"text": "4.4 Ultra-Long-Context Evaluation Results",
"text_level": 1,
"bbox": [507, 722, 865, 739],
"page_idx": 5
},
{
"type": "text",
"text": "We evaluate the following proprietary models under ultra-long-context settings: (1) GPT-4-Turbo-0125, (2) GPT-4-Turbo-1106, (3) Claude-2, and (4) Claude-2.1. We also evaluate InternLM2-7b on the BestAnswer benchmark under ultra-long-context settings. Due to the high API calling expense, we test 50 samples under each ultra-long-context setting. Table 6 presents the results.",
"bbox": [505, 744, 884, 871],
"page_idx": 5
},
{
"type": "text",
"text": "Though the evaluated models claim that they can understand long text up to $100,000+$ tokens (e.g., a whole book with hundreds of pages), they suf",
"bbox": [507, 873, 882, 921],
"page_idx": 5
},
{
"type": "page_footnote",
"text": "${}^{4}$ A valid answer contains a permutation of $\\mathrm{N}$ segment numbers on TSort and at least one designation of answers on BestAnswer.",
"bbox": [112, 882, 487, 919],
"page_idx": 5
},
{
"type": "page_number",
"text": "3717",
"bbox": [480, 927, 519, 940],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/d342b6575a50bb902030f8188fb696b1cfb0bde6f37336fccfc61d023eefefc3.jpg",
"image_caption": ["Figure 2: The instruction following rate of LLMs on TSort (Left) and BestAnswer (Right) under long-context settings. GPT-4-Turbo on TSort and all proprietary models on BestAnswer achieve $100\\%$ instruction following rate across all long-context settings, thus not displayed."],
"image_footnote": [],
"bbox": [132, 84, 863, 303],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/5348884f3ce474b354d57caf61d14319297e2e9046969b1e4b9b9882d1a447b0.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Benchmark</td><td>Model</td><td>32k</td><td>64k</td><td>128k</td></tr><tr><td rowspan=\"5\">TSort</td><td>GPT-4-Turbo-0125</td><td>2.0</td><td>4.0</td><td>2.0</td></tr><tr><td>GPT-4-Turbo-1106</td><td>6.0</td><td>6.0</td><td>6.0</td></tr><tr><td>Claude-2</td><td>0.0</td><td>0.0</td><td>/</td></tr><tr><td>Claude-2.1</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td>Random Guess</td><td>4.2</td><td>4.2</td><td>4.2</td></tr><tr><td rowspan=\"6\">BestAnswer</td><td>GPT-4-Turbo-0125</td><td>30.0</td><td>0.0</td><td>0.0</td></tr><tr><td>GPT-4-Turbo-1106</td><td>16.0</td><td>0.0</td><td>0.0</td></tr><tr><td>Claude-2</td><td>4.0</td><td>0.0</td><td>/</td></tr><tr><td>Claude-2.1</td><td>4.0</td><td>0.0</td><td>0.0</td></tr><tr><td>InternLM2-7b</td><td>0.5</td><td>0.5</td><td>0.0</td></tr><tr><td>Random Guess</td><td>0.6</td><td>0.3</td><td>0.1</td></tr></table>",
"bbox": [122, 376, 485, 546],
"page_idx": 6
},
{
"type": "text",
"text": "fer from a dramatic decline in performance under ultra-long-context settings compared to their long-context performance. For the TSort task, GPT-4-Turbo only achieves random guess level accuracy, while Claude fails to give any correct answers. For BestAnswer, the performance of all three models falls sharply from 16k to 32k text length. Meanwhile, they cannot give any correct answer when the text length is greater than 32k.",
"bbox": [112, 611, 489, 755],
"page_idx": 6
},
{
"type": "text",
"text": "4.5 Ablation Study",
"text_level": 1,
"bbox": [112, 768, 280, 783],
"page_idx": 6
},
{
"type": "text",
"text": "4.5.1 Perplexity Evaluation on TSort",
"text_level": 1,
"bbox": [112, 789, 420, 804],
"page_idx": 6
},
{
"type": "text",
"text": "Perplexity (PPL) evaluation is frequently adopted to assess the capability of LLMs. During inference, models compute the perplexity of multiple candidates and the one with the lowest perplexity is selected as the inference result. For TSort, we create 24 candidates for perplexity computation, each of which is a permutation of the 4 text segments.",
"bbox": [112, 808, 489, 921],
"page_idx": 6
},
{
"type": "text",
"text": "We conduct PPL-based evaluation for open-source LLMs on 2k, 4k and 8k text length settings. Table 7 exhibits the PPL-Eval result on TSort. When text segments are arranged in the correct order, a significantly lower perplexity score can usually be observed<sup>5</sup>, resulting in high TSort accuracy. However, when the sorting task is presented as QAs where LLMs are asked to directly output the correct order, the performance significantly deteriorates, indicating the limited instruction following capabilities of existing LLMs.",
"bbox": [507, 380, 884, 557],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/7024413ba03cf4d4a9a1e17c1ba8fe892989c458ed5e73aa882e3fe4ebee5d94.jpg",
"table_caption": ["Table 6: Results of LLMs on TSort and BestAnswer benchmarks in ultra-long context settings."],
"table_footnote": [],
"table_body": "<table><tr><td>TSort (PPL Eval)</td><td>2k</td><td>4k</td><td>8k</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>60.9</td><td>68.3</td><td>77.4</td></tr><tr><td>ChatGLM2-6B-32k</td><td>40.5</td><td>53.5</td><td>57.5</td></tr><tr><td>ChatGLM3-6B-32k</td><td>50.1</td><td>57.0</td><td>59.3</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>70.1</td><td>78.3</td><td>77.7</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>79.3</td><td>86.7</td><td>89.2</td></tr><tr><td>Random Guess</td><td>4.2</td><td>4.2</td><td>4.2</td></tr></table>",
"bbox": [517, 569, 878, 686],
"page_idx": 6
},
{
"type": "text",
"text": "Table 7: Perplexity Evaluation Results on TSort for open-source LLMs.",
"bbox": [507, 694, 882, 722],
"page_idx": 6
},
{
"type": "text",
"text": "4.5.2 Position Bias in BestAnswer",
"text_level": 1,
"bbox": [507, 751, 789, 765],
"page_idx": 6
},
{
"type": "text",
"text": "To study the position bias of existing LLMs, in BestAnswer, we keep questions and answer candidates the same and alter the position of groundtruth answers. Specifically, we manually set the groundtruth answer at the beginning, in the middle, or at the rear of all answers and then perform the evaluation. Table 8 displays the evaluation",
"bbox": [507, 771, 884, 883],
"page_idx": 6
},
{
"type": "page_footnote",
"text": "5One potential cause is that the chapters have been used for pretraining.",
"bbox": [507, 894, 882, 921],
"page_idx": 6
},
{
"type": "page_number",
"text": "3718",
"bbox": [480, 927, 519, 940],
"page_idx": 6
},
{
"type": "text",
"text": "results. All models demonstrate significant position bias in choosing the most helpful answer. Most models achieve much better accuracy when the most helpful answer is presented at the beginning. Claude-2 exhibits some unique behavior: it performs the best when the groundtruth is positioned at the rear in 4 of the 5 different settings. As the input length increases, the position bias becomes more obvious. For instance, Vicuna-7b-v1.5-16k demonstrates relatively uniform accuracy under the 1k setting. However, when the input length extends to 16k tokens, the model's performance remains stable only when the best answer is at the front.",
"bbox": [112, 84, 489, 294],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/03480100c8e0296a665b54b6412e94034ea85b1bca38400e98795b8ccdf0429c.jpg",
"table_caption": [],
"table_footnote": ["Table 8: Results of LLMs on BestAnswer where the best answer is set at the front, in the middle and at the rear of all answers. Pos denotes the position of the best answer."],
"table_body": "<table><tr><td>BestAnswer</td><td>Pos</td><td>1k</td><td>2k</td><td>4k</td><td>8k</td><td>16k</td></tr><tr><td rowspan=\"3\">GPT-4-Turbo-1106</td><td>front</td><td>76.5</td><td>82.5</td><td>86.5</td><td>90.0</td><td>82.0</td></tr><tr><td>mid</td><td>74.5</td><td>68.0</td><td>60.0</td><td>38.0</td><td>38.5</td></tr><tr><td>rear</td><td>57.5</td><td>46.6</td><td>44.0</td><td>40.5</td><td>26.5</td></tr><tr><td rowspan=\"3\">GPT-3.5-Turbo-1106</td><td>front</td><td>77.0</td><td>80.5</td><td>77.0</td><td>46.5</td><td>2.5</td></tr><tr><td>mid</td><td>64.5</td><td>48.5</td><td>32.0</td><td>9.5</td><td>0.5</td></tr><tr><td>rear</td><td>37.5</td><td>19.0</td><td>8.5</td><td>6.0</td><td>3.5</td></tr><tr><td rowspan=\"3\">Claude-2</td><td>front</td><td>34.0</td><td>19.0</td><td>14.5</td><td>50.0</td><td>6.0</td></tr><tr><td>mid</td><td>49.0</td><td>35.5</td><td>21.5</td><td>13.0</td><td>5.0</td></tr><tr><td>rear</td><td>59.0</td><td>36.5</td><td>26.0</td><td>11.0</td><td>9.5</td></tr><tr><td rowspan=\"3\">LongChat-7b-v1.5-32k</td><td>front</td><td>24.1</td><td>5.0</td><td>12.1</td><td>33.6</td><td>29.0</td></tr><tr><td>mid</td><td>32.7</td><td>13.6</td><td>0.2</td><td>0.2</td><td>0.0</td></tr><tr><td>rear</td><td>29.8</td><td>1.9</td><td>0.0</td><td>0.1</td><td>0.1</td></tr><tr><td rowspan=\"3\">ChatGLM2-6B-32k</td><td>front</td><td>30.0</td><td>31.5</td><td>46.2</td><td>10.5</td><td>0.5</td></tr><tr><td>mid</td><td>27.7</td><td>10.4</td><td>1.0</td><td>0.1</td><td>0.1</td></tr><tr><td>rear</td><td>28.5</td><td>12.4</td><td>2.6</td><td>4.1</td><td>0.0</td></tr><tr><td rowspan=\"3\">ChatGLM3-6B-32k</td><td>front</td><td>48.9</td><td>34.3</td><td>37.6</td><td>35.8</td><td>19.0</td></tr><tr><td>mid</td><td>41.9</td><td>22.3</td><td>5.3</td><td>0.9</td><td>0.1</td></tr><tr><td>rear</td><td>28.8</td><td>5.4</td><td>3.7</td><td>8.8</td><td>2.9</td></tr><tr><td rowspan=\"3\">Vicuna-7b-v1.5-16k</td><td>front</td><td>29.3</td><td>8.9</td><td>14.0</td><td>37.6</td><td>25.4</td></tr><tr><td>mid</td><td>32.8</td><td>13.6</td><td>0.0</td><td>0.0</td><td>0.2</td></tr><tr><td>rear</td><td>34.2</td><td>2.1</td><td>0.0</td><td>0.0</td><td>0.7</td></tr><tr><td rowspan=\"3\">Vicuna-13b-v1.5-16k</td><td>front</td><td>52.5</td><td>51.4</td><td>58.6</td><td>81.7</td><td>11.8</td></tr><tr><td>mid</td><td>64.5</td><td>29.2</td><td>1.5</td><td>0.5</td><td>0.3</td></tr><tr><td>rear</td><td>34.2</td><td>2.4</td><td>0.0</td><td>0.0</td><td>13.4</td></tr></table>",
"bbox": [122, 305, 485, 615],
"page_idx": 7
},
{
"type": "text",
"text": "4.5.3 Scalable Position Embeddings",
"text_level": 1,
"bbox": [112, 708, 410, 724],
"page_idx": 7
},
{
"type": "text",
"text": "Scalable position embeddings have shown their value in extending context windows while requiring minimal or no fine-tuning steps. Existing position embedding methods for context window extension fall into two major categories: position interpolation and length extrapolation. NTK-aware Scaled RoPE combines the advantages of both methods by changing the base of RoPE. ReRoPE and Leaky ReRoPE (Su, 2023) design a window size to control the application of scalable position embeddings directly. We conduct our study on Vicuna-v1.5 models (Zheng et al., 2023), which",
"bbox": [112, 728, 489, 921],
"page_idx": 7
},
{
"type": "text",
"text": "are Llama 2 fine-tuned with a 4k context window. We adopt the original models (4k context window) as the baseline across all settings. Table 9 shows the results of different position embedding methods on the BestAnswer benchmark. Our findings indicate that scalable position embeddings do improve the long-context modeling capability. All methods enhance the accuracy under the 8k setting, which is beyond the original context window. Concurrently, the model performance under short settings (e.g., 1k) is basically retained. NTK-aware Scaled RoPE diminishes performance at 1k context length, but outperforms the other two methods on longer contexts. The advantage of these methods is more obvious on Vicuna-13b-v1.5. Moreover, compared to their 16k versions, which utilize Flash Attention and are further trained on high-quality 16k length conversation data, advanced scalable position embeddings still achieve comparable performance.",
"bbox": [507, 84, 884, 405],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/7347c23eded3962b84f0f8b0af5a36cc1ca57c0812b96cd4d32d1f267247ddf2.jpg",
"table_caption": [],
"table_footnote": ["Table 9: Results of Vicuna-v1.5 with different context window extrapolation methods on BestAnswer. 'Original (4k) / (16k)' denotes the original Vicuna model trained with 4k / 16k context lengths. In the reported 'X/Y', X indicates the accuracy while Y indicates the accuracy when cases that failed to follow the instruction are excluded."],
"table_body": "<table><tr><td>Vicuna-7b-v1.5</td><td>1k</td><td>2k</td><td>4k</td><td>8k</td></tr><tr><td>ReRoPE</td><td>39.6/39.6</td><td>11.6/11.6</td><td>4.7/5.4</td><td>2.3/3.2</td></tr><tr><td>Leaky ReRoPE</td><td>39.9/39.9</td><td>11.2/11.2</td><td>5.1/5.7</td><td>1.3/2.0</td></tr><tr><td>NTK</td><td>32.5/32.5</td><td>10.7/10.7</td><td>5.8/5.8</td><td>3.9/3.9</td></tr><tr><td>Original(4k)</td><td>39.5/39.5</td><td>9.8/11.0</td><td>4.2/5.5</td><td>0.0/0.0</td></tr><tr><td>Original(16k)</td><td>37.0/39.5</td><td>11.1/11.1</td><td>5.8/5.8</td><td>2.5/2.7</td></tr><tr><td>Vicuna-13b-v1.5</td><td>1k</td><td>2k</td><td>4k</td><td>8k</td></tr><tr><td>ReRoPE</td><td>49.2/49.2</td><td>22.5/22.5</td><td>9.2/10.0</td><td>1.5/2.8</td></tr><tr><td>Leaky ReRoPE</td><td>49.3/49.3</td><td>23.8/23.8</td><td>8.7/9.8</td><td>1.3/2.6</td></tr><tr><td>NTK</td><td>43.8/43.8</td><td>23.0/23.0</td><td>11.1/11.1</td><td>2.3/2.3</td></tr><tr><td>Original(4k)</td><td>49.1/49.1</td><td>17.7/17.7</td><td>5.9/5.9</td><td>0.1/1.0</td></tr><tr><td>Original(16k)</td><td>53.4/53.4</td><td>29.2/29.2</td><td>13.1/13.5</td><td>2.6/2.7</td></tr></table>",
"bbox": [514, 416, 878, 565],
"page_idx": 7
},
{
"type": "text",
"text": "4.5.4 Comparison with Other Long-Context Benchmarks",
"text_level": 1,
"bbox": [507, 706, 873, 736],
"page_idx": 7
},
{
"type": "text",
"text": "We compare Ada-LEval with other long-context benchmarks to validate that our benchmarks require much more comprehensive text understanding than traditional long-context benchmarks.",
"bbox": [507, 741, 882, 806],
"page_idx": 7
},
{
"type": "text",
"text": "We regard a task as requiring comprehensive text understanding if model performance decreases sharply when the text is truncated. The TSort task meets this requirement, since truncating any segment will lead to an incorrect answer.",
"bbox": [507, 807, 884, 887],
"page_idx": 7
},
{
"type": "text",
"text": "To show that BestAnswer requires more comprehensive text understanding than traditional QA",
"bbox": [507, 889, 882, 921],
"page_idx": 7
},
|
| 1275 |
+
{
|
| 1276 |
+
"type": "page_number",
|
| 1277 |
+
"text": "3719",
|
| 1278 |
+
"bbox": [
|
| 1279 |
+
480,
|
| 1280 |
+
927,
|
| 1281 |
+
519,
|
| 1282 |
+
940
|
| 1283 |
+
],
|
| 1284 |
+
"page_idx": 7
|
| 1285 |
+
},
|
| 1286 |
+
{
|
| 1287 |
+
"type": "text",
|
| 1288 |
+
"text": "and summarization tasks, we conduct an experiment on BestAnswer(16k version) and 2 classic long-context datasets, NarrativeQA(LongBench subset, QA task) and GovReport(LongBench subset, summarization task) respectively. The metric for NarrativeQA is F1 score and metric for GovReport is Rouge-L. We evaluate the performance of GPT-4-Turbo-1106 on all 3 datasets. Each test case is truncated into 2k, 4k and 8k version as the input. We also provide its full version for comparison.",
|
| 1289 |
+
"bbox": [
|
| 1290 |
+
112,
|
| 1291 |
+
84,
|
| 1292 |
+
492,
|
| 1293 |
+
247
|
| 1294 |
+
],
|
| 1295 |
+
"page_idx": 8
|
| 1296 |
+
},
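The truncation protocol described above can be sketched as follows, using the GPT-4 tokenizer via `tiktoken` (the paper measures lengths with the CL100K tokenizer); the variable names and the choice of keeping the leading prefix are assumptions.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4 tokenizer

def truncate(document: str, budget: int) -> str:
    """Keep only the first `budget` tokens of the input document."""
    return enc.decode(enc.encode(document)[:budget])

doc = "<a full-length BestAnswer / NarrativeQA / GovReport input>"  # placeholder
variants = {f"{b // 1000}k": truncate(doc, b) for b in (2000, 4000, 8000)}
variants["full"] = doc  # untruncated version for comparison
```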
|
| 1297 |
+
{
|
| 1298 |
+
"type": "table",
|
| 1299 |
+
"img_path": "images/c35218325e9bbdf771d91755ec59b0e9531772e55701612b65954132ffee40d6.jpg",
|
| 1300 |
+
"table_caption": [],
|
| 1301 |
+
"table_footnote": [],
|
| 1302 |
+
"table_body": "<table><tr><td>Benchmark</td><td>2k</td><td>4k</td><td>8k</td><td>Full</td><td>Avg #tokens</td></tr><tr><td>BestAnswer</td><td>11.0</td><td>20.0</td><td>31.5</td><td>44.0</td><td>15646</td></tr><tr><td>NarrativeQA</td><td>24.7</td><td>25.6</td><td>29.7</td><td>33.1</td><td>10276</td></tr><tr><td>GovReport</td><td>30.7</td><td>32.4</td><td>33.6</td><td>30.9</td><td>29872</td></tr></table>",
|
| 1303 |
+
"bbox": [
|
| 1304 |
+
122,
|
| 1305 |
+
265,
|
| 1306 |
+
485,
|
| 1307 |
+
323
|
| 1308 |
+
],
|
| 1309 |
+
"page_idx": 8
|
| 1310 |
+
},
|
| 1311 |
+
{
|
| 1312 |
+
"type": "text",
|
| 1313 |
+
"text": "Table 10: Results of GPT-4-Turbo on different long-context benchmarks.",
|
| 1314 |
+
"bbox": [
|
| 1315 |
+
112,
|
| 1316 |
+
332,
|
| 1317 |
+
489,
|
| 1318 |
+
362
|
| 1319 |
+
],
|
| 1320 |
+
"page_idx": 8
|
| 1321 |
+
},
|
| 1322 |
+
{
|
| 1323 |
+
"type": "text",
|
| 1324 |
+
"text": "From the table 10, the performance of GPT-4-Turbo on BestAnswer decreases more dramatically than NarrativeQA and GovReport when text is truncated. Notably, the performance on GovReport even increases when text is truncated into $4\\mathrm{k}$ and $8\\mathrm{k}$ . Therefore, our benchmarks require more full-text comprehension than traditional QA and summarization tasks.",
|
| 1325 |
+
"bbox": [
|
| 1326 |
+
112,
|
| 1327 |
+
386,
|
| 1328 |
+
489,
|
| 1329 |
+
514
|
| 1330 |
+
],
|
| 1331 |
+
"page_idx": 8
|
| 1332 |
+
},
|
| 1333 |
+
{
|
| 1334 |
+
"type": "text",
|
| 1335 |
+
"text": "5 Conclusion",
|
| 1336 |
+
"text_level": 1,
|
| 1337 |
+
"bbox": [
|
| 1338 |
+
112,
|
| 1339 |
+
542,
|
| 1340 |
+
247,
|
| 1341 |
+
558
|
| 1342 |
+
],
|
| 1343 |
+
"page_idx": 8
|
| 1344 |
+
},
|
| 1345 |
+
{
|
| 1346 |
+
"type": "text",
|
| 1347 |
+
"text": "In this paper, we introduce Ada-LEval, a length-adaptable dataset to assess long-context capability of LLMs. We conduct comprehensive experiments on multiple LLMs and find that all open-source models still lag significantly behind state-of-the-art proprietary models in terms of long context capability. When the input length scales to 4,000 tokens, most open-source models rapidly deteriorates to random guess level. In the meanwhile, the capability of proprietary models is also severely limited. When it comes to the ultra-long setting (32,000+ tokens), no proprietary model notably outperforms the random baseline. Ada-LEval is the first benchmark that evaluates LLMs under the ultra-long setting, and we hope that the limitations pointed out by this benchmarks can serve as valuable references for future developments of long-context LLMs.",
|
| 1348 |
+
"bbox": [
|
| 1349 |
+
112,
|
| 1350 |
+
579,
|
| 1351 |
+
489,
|
| 1352 |
+
852
|
| 1353 |
+
],
|
| 1354 |
+
"page_idx": 8
|
| 1355 |
+
},
|
| 1356 |
+
{
|
| 1357 |
+
"type": "text",
|
| 1358 |
+
"text": "Acknowledgement. This project is supported by the National Key R&D Program of China No.2022ZD0161600 and the Shanghai Postdoctoral Excellence Program (No.2023023).",
|
| 1359 |
+
"bbox": [
|
| 1360 |
+
112,
|
| 1361 |
+
857,
|
| 1362 |
+
489,
|
| 1363 |
+
921
|
| 1364 |
+
],
|
| 1365 |
+
"page_idx": 8
|
| 1366 |
+
},
|
| 1367 |
+
{
|
| 1368 |
+
"type": "text",
|
| 1369 |
+
"text": "6 Limitations",
|
| 1370 |
+
"text_level": 1,
|
| 1371 |
+
"bbox": [
|
| 1372 |
+
509,
|
| 1373 |
+
83,
|
| 1374 |
+
645,
|
| 1375 |
+
98
|
| 1376 |
+
],
|
| 1377 |
+
"page_idx": 8
|
| 1378 |
+
},
|
| 1379 |
+
{
|
| 1380 |
+
"type": "text",
|
| 1381 |
+
"text": "Ada-LEval is a challenging benchmark, requiring strong understanding and reasoning capabilities over long text. Due to the poor instruction following rate and copy instruction rate of open-source LLMs, Ada-LEval can hardly distinguish their long context capability through the accuracy metric.",
|
| 1382 |
+
"bbox": [
|
| 1383 |
+
507,
|
| 1384 |
+
111,
|
| 1385 |
+
884,
|
| 1386 |
+
206
|
| 1387 |
+
],
|
| 1388 |
+
"page_idx": 8
|
| 1389 |
+
},
|
| 1390 |
+
{
|
| 1391 |
+
"type": "text",
|
| 1392 |
+
"text": "Furthermore, as text length increases, the difficulty of Ada-LEval rises sharply under ultra-long-context settings. Even state-of-the-art proprietary models are not able to achieve an ideal performance, which further constrains its applicability to current LLMs.",
|
| 1393 |
+
"bbox": [
|
| 1394 |
+
507,
|
| 1395 |
+
208,
|
| 1396 |
+
885,
|
| 1397 |
+
303
|
| 1398 |
+
],
|
| 1399 |
+
"page_idx": 8
|
| 1400 |
+
},
|
| 1401 |
+
{
|
| 1402 |
+
"type": "text",
|
| 1403 |
+
"text": "References",
|
| 1404 |
+
"text_level": 1,
|
| 1405 |
+
"bbox": [
|
| 1406 |
+
510,
|
| 1407 |
+
332,
|
| 1408 |
+
610,
|
| 1409 |
+
349
|
| 1410 |
+
],
|
| 1411 |
+
"page_idx": 8
|
| 1412 |
+
},
|
| 1413 |
+
{
|
| 1414 |
+
"type": "list",
|
| 1415 |
+
"sub_type": "ref_text",
|
| 1416 |
+
"list_items": [
|
| 1417 |
+
"Chenxin An, Shansan Gong, Ming Zhong, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. 2023. L-eval: Instituting standardized evaluation for long context language models. arXiv preprint arXiv:2307.11088.",
|
| 1418 |
+
"Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, et al. 2023. Longbench: A bilingual, multitask benchmark for long context understanding. arXiv preprint arXiv:2308.14508.",
|
| 1419 |
+
"Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Zehui Chen, Zhi Chen, Pei Chu, Xiaoyi Dong, Haodong Duan, Qi Fan, Zhaoye Fei, Yang Gao, Jiaye Ge, Chenya Gu, Yuzhe Gu, Tao Gui, Aijia Guo, Qipeng Guo, Conghui He, Yingfan Hu, Ting Huang, Tao Jiang, Penglong Jiao, Zhenjiang Jin, Zhikai Lei, Jiaxing Li, Jingwen Li, Linyang Li, Shuaibin Li, Wei Li, Yining Li, Hongwei Liu, Jiangning Liu, Jiawei Hong, Kaiwen Liu, Kuikun Liu, Xiaoran Liu, Chengqi Lv, Hajun Lv, Kai Lv, Li Ma, Runyuan Ma, Zerun Ma, Wenchang Ning, Linke Ouyang, Jiantao Qiu, Yuan Qu, Fukai Shang, Yunfan Shao, Demin Song, Zifan Song, Zhihao Sui, Peng Sun, Yu Sun, Huanze Tang, Bin Wang, Guoteng Wang, Jiaqi Wang, Jiayu Wang, Rui Wang, Yudong Wang, Ziyi Wang, Xingjian Wei, Qizhen Weng, Fan Wu, Yingtong Xiong, Chao Xu, Ruiliang Xu, Hang Yan, Yirong Yan, Xiaogui Yang, Haochen Ye, Huaiyuan Ying, Jia Yu, Jing Yu, Yuhang Zang, Chuyu Zhang, Li Zhang, Pan Zhang, Peng Zhang, Ruijie Zhang, Shuo Zhang, Songyang Zhang, Wenjian Zhang, Wenwei Zhang, Xingcheng Zhang, Xinyue Zhang, Hui Zhao, Qian Zhao, Xiaomeng Zhao, Fengzhe Zhou, Zaida Zhou, Jingming Zhuo, Yicheng Zou, Xipeng Qiu, Yu Qiao and Dahua Lin. 2024. Internlm2 technical report.",
|
| 1420 |
+
"Howard Chen, Ramakanth Pasunuru, Jason Weston, and Asli Celikyilmaz. 2023a. Walking down the memory maze: Beyond context limit through interactive reading. arXiv preprint arXiv:2310.05029."
|
| 1421 |
+
],
|
| 1422 |
+
"bbox": [
|
| 1423 |
+
509,
|
| 1424 |
+
357,
|
| 1425 |
+
884,
|
| 1426 |
+
921
|
| 1427 |
+
],
|
| 1428 |
+
"page_idx": 8
|
| 1429 |
+
},
|
| 1430 |
+
{
|
| 1431 |
+
"type": "page_number",
|
| 1432 |
+
"text": "3720",
|
| 1433 |
+
"bbox": [
|
| 1434 |
+
480,
|
| 1435 |
+
927,
|
| 1436 |
+
519,
|
| 1437 |
+
940
|
| 1438 |
+
],
|
| 1439 |
+
"page_idx": 8
|
| 1440 |
+
},
|
| 1441 |
+
{
|
| 1442 |
+
"type": "list",
|
| 1443 |
+
"sub_type": "ref_text",
|
| 1444 |
+
"list_items": [
|
| 1445 |
+
"Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023b. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595.",
|
| 1446 |
+
"Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.",
|
| 1447 |
+
"OpenCompass Contributors. 2023. Opencompass: A universal evaluation platform for foundation models. https://github.com/open-compass/opencompass.",
|
| 1448 |
+
"Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022a. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344-16359.",
|
| 1449 |
+
"Tri Dao, Daniel Y Fu, Khaled K Saab, Armin W Thomas, Atri Rudra, and Christopher Ré. 2022b. Hungry hungry hippos: Towards language modeling with state space models. arXiv preprint arXiv:2212.14052.",
|
| 1450 |
+
"Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A Smith, and Matt Gardner. 2021. A dataset of information-seeking questions and answers anchored in research papers. arXiv preprint arXiv:2105.03011.",
|
| 1451 |
+
"Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. Llm. int8(): 8-bit matrix multiplication for transformers at scale. arXiv preprint arXiv:2208.07339.",
|
| 1452 |
+
"Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, and Furu Wei. 2023. Longnet: Scaling transformers to 1,000,000,000 tokens. arXiv preprint arXiv:2307.02486.",
|
| 1453 |
+
"Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. 2022. Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323.",
|
| 1454 |
+
"Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. 2021. Longt5: Efficient text-to-text transformer for long sequences. arXiv preprint arXiv:2112.07916.",
|
| 1455 |
+
"Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.",
|
| 1456 |
+
"Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. arXiv preprint arXiv:2104.02112.",
|
| 1457 |
+
"Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. 2023."
|
| 1458 |
+
],
|
| 1459 |
+
"bbox": [
|
| 1460 |
+
115,
|
| 1461 |
+
85,
|
| 1462 |
+
485,
|
| 1463 |
+
919
|
| 1464 |
+
],
|
| 1465 |
+
"page_idx": 9
|
| 1466 |
+
},
|
| 1467 |
+
{
|
| 1468 |
+
"type": "list",
|
| 1469 |
+
"sub_type": "ref_text",
|
| 1470 |
+
"list_items": [
|
| 1471 |
+
"C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.",
|
| 1472 |
+
"Tomáš Kočisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328.",
|
| 1473 |
+
"Wojciech Krysciński, Nazneen Rajani, Divyansh Agarwal, Caiming Xiong, and Dragomir Radev. 2021. Booksum: A collection of datasets for long-form narrative summarization. arXiv preprint arXiv:2105.08209.",
|
| 1474 |
+
"Dacheng Li*, Rulin Shao*, Anze Xie, Ying Sheng, Lian-min Zheng, Joseph E. Gonzalez, Ion Stoica, Xuezhe Ma, , and Hao Zhang. 2023. How long can opensource llms truly promise on context length?",
|
| 1475 |
+
"Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023a. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172.",
|
| 1476 |
+
"Tianyang Liu, Canwen Xu, and Julian McAuley. 2023b. Repobench: Benchmarking repository-level code auto-completion systems. arXiv preprint arXiv:2306.03091.",
|
| 1477 |
+
"Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332."
|
| 1478 |
+
],
|
| 1479 |
+
"bbox": [
|
| 1480 |
+
510,
|
| 1481 |
+
85,
|
| 1482 |
+
880,
|
| 1483 |
+
580
|
| 1484 |
+
],
|
| 1485 |
+
"page_idx": 9
|
| 1486 |
+
},
|
| 1487 |
+
{
|
| 1488 |
+
"type": "text",
|
| 1489 |
+
"text": "OpenAI. 2023. Gpt-4 technical report.",
|
| 1490 |
+
"bbox": [
|
| 1491 |
+
510,
|
| 1492 |
+
593,
|
| 1493 |
+
769,
|
| 1494 |
+
608
|
| 1495 |
+
],
|
| 1496 |
+
"page_idx": 9
|
| 1497 |
+
},
|
| 1498 |
+
{
|
| 1499 |
+
"type": "list",
|
| 1500 |
+
"sub_type": "ref_text",
|
| 1501 |
+
"list_items": [
|
| 1502 |
+
"Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409.",
|
| 1503 |
+
"Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, et al. 2022. **Scrolls: Standardized comparison over long language sequences. arXiv preprint arXiv:2201.03533.**",
|
| 1504 |
+
"Jianlin Su. 2023. Rectified rotary position embeddings. https://github.com/bojone/erope.",
|
| 1505 |
+
"Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. 2021. Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864.",
|
| 1506 |
+
"Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, and Mohit Iyyer. 2023. Pearl: Prompting large language models to plan and execute actions over long documents. arXiv preprint arXiv:2305.14564."
|
| 1507 |
+
],
|
| 1508 |
+
"bbox": [
|
| 1509 |
+
510,
|
| 1510 |
+
620,
|
| 1511 |
+
880,
|
| 1512 |
+
919
|
| 1513 |
+
],
|
| 1514 |
+
"page_idx": 9
|
| 1515 |
+
},
|
| 1516 |
+
{
|
| 1517 |
+
"type": "page_number",
|
| 1518 |
+
"text": "3721",
|
| 1519 |
+
"bbox": [
|
| 1520 |
+
480,
|
| 1521 |
+
928,
|
| 1522 |
+
517,
|
| 1523 |
+
940
|
| 1524 |
+
],
|
| 1525 |
+
"page_idx": 9
|
| 1526 |
+
},
|
| 1527 |
+
{
|
| 1528 |
+
"type": "text",
|
| 1529 |
+
"text": "Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. 2022. A length-extrapolatable transformer. arXiv preprint arXiv:2212.10554.",
|
| 1530 |
+
"bbox": [
|
| 1531 |
+
115,
|
| 1532 |
+
85,
|
| 1533 |
+
487,
|
| 1534 |
+
139
|
| 1535 |
+
],
|
| 1536 |
+
"page_idx": 10
|
| 1537 |
+
},
|
| 1538 |
+
{
|
| 1539 |
+
"type": "text",
|
| 1540 |
+
"text": "Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.",
|
| 1541 |
+
"bbox": [
|
| 1542 |
+
115,
|
| 1543 |
+
147,
|
| 1544 |
+
487,
|
| 1545 |
+
227
|
| 1546 |
+
],
|
| 1547 |
+
"page_idx": 10
|
| 1548 |
+
},
|
| 1549 |
+
{
|
| 1550 |
+
"type": "text",
|
| 1551 |
+
"text": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.",
|
| 1552 |
+
"bbox": [
|
| 1553 |
+
115,
|
| 1554 |
+
237,
|
| 1555 |
+
487,
|
| 1556 |
+
315
|
| 1557 |
+
],
|
| 1558 |
+
"page_idx": 10
|
| 1559 |
+
},
|
| 1560 |
+
{
|
| 1561 |
+
"type": "text",
|
| 1562 |
+
"text": "Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. Advances in neural information processing systems, 33:17283-17297.",
|
| 1563 |
+
"bbox": [
|
| 1564 |
+
115,
|
| 1565 |
+
325,
|
| 1566 |
+
487,
|
| 1567 |
+
404
|
| 1568 |
+
],
|
| 1569 |
+
"page_idx": 10
|
| 1570 |
+
},
|
| 1571 |
+
{
|
| 1572 |
+
"type": "text",
|
| 1573 |
+
"text": "Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414.",
|
| 1574 |
+
"bbox": [
|
| 1575 |
+
115,
|
| 1576 |
+
414,
|
| 1577 |
+
487,
|
| 1578 |
+
479
|
| 1579 |
+
],
|
| 1580 |
+
"page_idx": 10
|
| 1581 |
+
},
|
| 1582 |
+
{
|
| 1583 |
+
"type": "text",
|
| 1584 |
+
"text": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685.",
|
| 1585 |
+
"bbox": [
|
| 1586 |
+
115,
|
| 1587 |
+
489,
|
| 1588 |
+
487,
|
| 1589 |
+
556
|
| 1590 |
+
],
|
| 1591 |
+
"page_idx": 10
|
| 1592 |
+
},
|
| 1593 |
+
{
|
| 1594 |
+
"type": "text",
|
| 1595 |
+
"text": "A Test Case Building Statistics",
|
| 1596 |
+
"text_level": 1,
|
| 1597 |
+
"bbox": [
|
| 1598 |
+
509,
|
| 1599 |
+
83,
|
| 1600 |
+
791,
|
| 1601 |
+
99
|
| 1602 |
+
],
|
| 1603 |
+
"page_idx": 10
|
| 1604 |
+
},
|
| 1605 |
+
{
|
| 1606 |
+
"type": "text",
|
| 1607 |
+
"text": "Recall that for each case length on Tsort task, we set the length upper limit for each text segment and the neighboring paragraphs before and after these contiguous chapters. We also set stride between beginning paragraphs. Table 11 demonstrates the detail statistics on the length upper limit and the stride.",
|
| 1608 |
+
"bbox": [
|
| 1609 |
+
507,
|
| 1610 |
+
120,
|
| 1611 |
+
880,
|
| 1612 |
+
231
|
| 1613 |
+
],
|
| 1614 |
+
"page_idx": 10
|
| 1615 |
+
},
|
| 1616 |
+
{
|
| 1617 |
+
"type": "text",
|
| 1618 |
+
"text": "On BestAnswer task, two questions are regarded as similar questions when they have $40\\%$ tags in common. Under ultra-long-context settings, both questions should contain at least 1 tag in common",
|
| 1619 |
+
"bbox": [
|
| 1620 |
+
507,
|
| 1621 |
+
237,
|
| 1622 |
+
880,
|
| 1623 |
+
302
|
| 1624 |
+
],
|
| 1625 |
+
"page_idx": 10
|
| 1626 |
+
},
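A minimal sketch of the similarity criterion described above. The paper states a 40% tag-overlap threshold without specifying the denominator, so normalizing by the smaller tag set below is an assumption.

```python
def similar(tags_a: set[str], tags_b: set[str], threshold: float = 0.4) -> bool:
    """Two questions are 'similar' when they share at least `threshold`
    of their tags (assumption: relative to the smaller tag set)."""
    if not tags_a or not tags_b:
        return False
    overlap = len(tags_a & tags_b)
    return overlap / min(len(tags_a), len(tags_b)) >= threshold

def similar_ultra_long(tags_a: set[str], tags_b: set[str]) -> bool:
    """Relaxed criterion for ultra-long settings: one shared tag suffices."""
    return len(tags_a & tags_b) >= 1

# {"python"} is 1 of the 2 tags in the smaller set: 0.5 >= 0.4 -> similar.
assert similar({"python", "pandas", "dataframe"}, {"python", "numpy"})
```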
|
| 1627 |
+
{
|
| 1628 |
+
"type": "text",
|
| 1629 |
+
"text": "B Evaluation Setups",
|
| 1630 |
+
"text_level": 1,
|
| 1631 |
+
"bbox": [
|
| 1632 |
+
509,
|
| 1633 |
+
329,
|
| 1634 |
+
702,
|
| 1635 |
+
346
|
| 1636 |
+
],
|
| 1637 |
+
"page_idx": 10
|
| 1638 |
+
},
|
| 1639 |
+
{
|
| 1640 |
+
"type": "text",
|
| 1641 |
+
"text": "Evaluation Hyperparameters. For open-source LLMs, we adopt their default hyperparameters during evaluation on Ada-LEval. For proprietary models including GPT-4-Turbo, GPT-3.5-Turbo-1106, we set the temperature to 0.",
|
| 1642 |
+
"bbox": [
|
| 1643 |
+
507,
|
| 1644 |
+
366,
|
| 1645 |
+
882,
|
| 1646 |
+
445
|
| 1647 |
+
],
|
| 1648 |
+
"page_idx": 10
|
| 1649 |
+
},
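A minimal sketch of the proprietary-model setup described above, using the OpenAI Python SDK with greedy decoding (temperature 0); the exact model identifier string is illustrative.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def query(prompt: str, model: str = "gpt-4-1106-preview") -> str:
    # Temperature 0 approximates greedy decoding, as used in the evaluation.
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content
```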
|
| 1650 |
+
{
|
| 1651 |
+
"type": "text",
|
| 1652 |
+
"text": "Computational Budget. Our experiments for open-source LLMs are conducted on NVIDIA A100 80GB GPU. The entire evaluation consumes around 800 GPU-hours.",
|
| 1653 |
+
"bbox": [
|
| 1654 |
+
507,
|
| 1655 |
+
451,
|
| 1656 |
+
880,
|
| 1657 |
+
514
|
| 1658 |
+
],
|
| 1659 |
+
"page_idx": 10
|
| 1660 |
+
},
|
| 1661 |
+
{
|
| 1662 |
+
"type": "text",
|
| 1663 |
+
"text": "Benchmark Instructions. We present instructions of both tasks within Ada-LEval. To ensure that models know what to do, we contain the sample input and output format that models need to follow in solving problems. The instructions are shown below.",
|
| 1664 |
+
"bbox": [
|
| 1665 |
+
507,
|
| 1666 |
+
520,
|
| 1667 |
+
880,
|
| 1668 |
+
614
|
| 1669 |
+
],
|
| 1670 |
+
"page_idx": 10
|
| 1671 |
+
},
|
| 1672 |
+
{
|
| 1673 |
+
"type": "text",
|
| 1674 |
+
"text": "Validity of 200-testcase subset. Our experiments on long-context settings adopt 200-testcase subset for proprietary models and 1000-testcase subset for open-source LLMs. To ensure that evaluation results on 200-testcase subset is valid, Table 12 and Table 13 display results on 200-testcase subset.",
|
| 1675 |
+
"bbox": [
|
| 1676 |
+
507,
|
| 1677 |
+
620,
|
| 1678 |
+
880,
|
| 1679 |
+
717
|
| 1680 |
+
],
|
| 1681 |
+
"page_idx": 10
|
| 1682 |
+
},
|
| 1683 |
+
{
|
| 1684 |
+
"type": "table",
|
| 1685 |
+
"img_path": "images/b1ff763b5fa939e04e7225df4eb312705bd259ff4bc2d20dae6288f0ee080556.jpg",
|
| 1686 |
+
"table_caption": [],
|
| 1687 |
+
"table_footnote": [],
|
| 1688 |
+
"table_body": "<table><tr><td>Setting</td><td>Before</td><td>Segments</td><td>After</td><td>Stride</td></tr><tr><td>2k</td><td>200</td><td>350</td><td>200</td><td>64</td></tr><tr><td>4k</td><td>300</td><td>800</td><td>300</td><td>64</td></tr><tr><td>8k</td><td>400</td><td>1750</td><td>400</td><td>64</td></tr><tr><td>16k</td><td>500</td><td>3700</td><td>500</td><td>64</td></tr><tr><td>32k</td><td>500</td><td>7700</td><td>500</td><td>128</td></tr><tr><td>64k</td><td>500</td><td>15700</td><td>500</td><td>128</td></tr><tr><td>128k</td><td>500</td><td>31700</td><td>500</td><td>128</td></tr></table>",
|
| 1689 |
+
"bbox": [
|
| 1690 |
+
514,
|
| 1691 |
+
737,
|
| 1692 |
+
877,
|
| 1693 |
+
859
|
| 1694 |
+
],
|
| 1695 |
+
"page_idx": 10
|
| 1696 |
+
},
|
| 1697 |
+
{
|
| 1698 |
+
"type": "text",
|
| 1699 |
+
"text": "Table 11: The length upper limit of text segments and stride between beginning paragraphs on TSort.",
|
| 1700 |
+
"bbox": [
|
| 1701 |
+
507,
|
| 1702 |
+
869,
|
| 1703 |
+
880,
|
| 1704 |
+
897
|
| 1705 |
+
],
|
| 1706 |
+
"page_idx": 10
|
| 1707 |
+
},
|
| 1708 |
+
{
|
| 1709 |
+
"type": "page_number",
|
| 1710 |
+
"text": "3722",
|
| 1711 |
+
"bbox": [
|
| 1712 |
+
480,
|
| 1713 |
+
927,
|
| 1714 |
+
519,
|
| 1715 |
+
940
|
| 1716 |
+
],
|
| 1717 |
+
"page_idx": 10
|
| 1718 |
+
},
|
| 1719 |
+
{
|
| 1720 |
+
"type": "table",
|
| 1721 |
+
"img_path": "images/6f41f0d62670564dc7da1b1fc78b4fa0d7dfc394aa4af78849d7ec74ba25d926.jpg",
|
| 1722 |
+
"table_caption": [],
|
| 1723 |
+
"table_footnote": [],
|
| 1724 |
+
"table_body": "<table><tr><td>TSort (200-testcase)</td><td>2k</td><td>4k</td><td>8k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-0125</td><td>15.5</td><td>16.5</td><td>8.5</td><td>5.5</td></tr><tr><td>GPT-4-Turbo-1106</td><td>18.5</td><td>15.5</td><td>7.5</td><td>3.5</td></tr><tr><td>GPT-3.5-Turbo-1106</td><td>4.0</td><td>4.5</td><td>4.5</td><td>5.5</td></tr><tr><td>Claude-2</td><td>5.0</td><td>5.0</td><td>4.5</td><td>3.0</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>5.0</td><td>5.0</td><td>2.5</td><td>2.0</td></tr><tr><td>ChatGLM2-6B-32k</td><td>1.0</td><td>0.5</td><td>0.5</td><td>1.0</td></tr><tr><td>ChatGLM3-6B-32k</td><td>3.5</td><td>3.0</td><td>1.0</td><td>0.5</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>5.0</td><td>1.5</td><td>1.0</td><td>2.5</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>5.0</td><td>5.0</td><td>3.0</td><td>4.0</td></tr><tr><td>Random Guess</td><td>4.2</td><td>4.2</td><td>4.2</td><td>4.2</td></tr></table>",
|
| 1725 |
+
"bbox": [
|
| 1726 |
+
119,
|
| 1727 |
+
80,
|
| 1728 |
+
482,
|
| 1729 |
+
237
|
| 1730 |
+
],
|
| 1731 |
+
"page_idx": 11
|
| 1732 |
+
},
|
| 1733 |
+
{
|
| 1734 |
+
"type": "text",
|
| 1735 |
+
"text": "Table 12: TSort results under long-context settings(200-testcase subset).",
|
| 1736 |
+
"bbox": [
|
| 1737 |
+
112,
|
| 1738 |
+
247,
|
| 1739 |
+
489,
|
| 1740 |
+
275
|
| 1741 |
+
],
|
| 1742 |
+
"page_idx": 11
|
| 1743 |
+
},
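The 'Random Guess' row of 4.2 in Table 12 follows directly from the task design: with $N = 4$ segments there are $4! = 24$ possible orderings, so a uniformly random answer is correct with probability $1/24 \approx 4.2\%$ at every length setting.

```python
from math import factorial

# With N = 4 segments, a random ordering is correct with probability 1/4!.
print(f"{100 / factorial(4):.1f}%")  # 4.2%
```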
|
| 1744 |
+
{
|
| 1745 |
+
"type": "text",
|
| 1746 |
+
"text": "TSort:",
|
| 1747 |
+
"text_level": 1,
|
| 1748 |
+
"bbox": [
|
| 1749 |
+
141,
|
| 1750 |
+
376,
|
| 1751 |
+
196,
|
| 1752 |
+
390
|
| 1753 |
+
],
|
| 1754 |
+
"page_idx": 11
|
| 1755 |
+
},
|
| 1756 |
+
{
|
| 1757 |
+
"type": "text",
|
| 1758 |
+
"text": "You are an AI assistant. Your job is to sort multiple book sections into the correct order.",
|
| 1759 |
+
"bbox": [
|
| 1760 |
+
141,
|
| 1761 |
+
393,
|
| 1762 |
+
462,
|
| 1763 |
+
438
|
| 1764 |
+
],
|
| 1765 |
+
"page_idx": 11
|
| 1766 |
+
},
|
| 1767 |
+
{
|
| 1768 |
+
"type": "text",
|
| 1769 |
+
"text": "Each time, you will be provided with 4 pieces of text.",
|
| 1770 |
+
"bbox": [
|
| 1771 |
+
141,
|
| 1772 |
+
441,
|
| 1773 |
+
460,
|
| 1774 |
+
470
|
| 1775 |
+
],
|
| 1776 |
+
"page_idx": 11
|
| 1777 |
+
},
|
| 1778 |
+
{
|
| 1779 |
+
"type": "text",
|
| 1780 |
+
"text": "These texts form a continuous part of a book, but are provided in random order.",
|
| 1781 |
+
"bbox": [
|
| 1782 |
+
141,
|
| 1783 |
+
473,
|
| 1784 |
+
460,
|
| 1785 |
+
504
|
| 1786 |
+
],
|
| 1787 |
+
"page_idx": 11
|
| 1788 |
+
},
|
| 1789 |
+
{
|
| 1790 |
+
"type": "text",
|
| 1791 |
+
"text": "You need to find the correct order and return the answer in a string.",
|
| 1792 |
+
"bbox": [
|
| 1793 |
+
141,
|
| 1794 |
+
505,
|
| 1795 |
+
460,
|
| 1796 |
+
536
|
| 1797 |
+
],
|
| 1798 |
+
"page_idx": 11
|
| 1799 |
+
},
|
| 1800 |
+
{
|
| 1801 |
+
"type": "text",
|
| 1802 |
+
"text": "For example, if you output [4, 1, 3, 2], that means the correct order is: Part 4 -> Part 1 -> Part 3 -> Part 2.",
|
| 1803 |
+
"bbox": [
|
| 1804 |
+
139,
|
| 1805 |
+
538,
|
| 1806 |
+
460,
|
| 1807 |
+
583
|
| 1808 |
+
],
|
| 1809 |
+
"page_idx": 11
|
| 1810 |
+
},
|
| 1811 |
+
{
|
| 1812 |
+
"type": "text",
|
| 1813 |
+
"text": "You will also be provided with the neighboring paragraphs before and after the 4 pieces of texts.",
|
| 1814 |
+
"bbox": [
|
| 1815 |
+
141,
|
| 1816 |
+
586,
|
| 1817 |
+
460,
|
| 1818 |
+
631
|
| 1819 |
+
],
|
| 1820 |
+
"page_idx": 11
|
| 1821 |
+
},
|
| 1822 |
+
{
|
| 1823 |
+
"type": "text",
|
| 1824 |
+
"text": "The case sample is shown below and you should give me the answer in the format exactly the same as the sample.",
|
| 1825 |
+
"bbox": [
|
| 1826 |
+
141,
|
| 1827 |
+
634,
|
| 1828 |
+
460,
|
| 1829 |
+
681
|
| 1830 |
+
],
|
| 1831 |
+
"page_idx": 11
|
| 1832 |
+
},
|
| 1833 |
+
{
|
| 1834 |
+
"type": "text",
|
| 1835 |
+
"text": "However, you should NOT focus on the content of sample answer.",
|
| 1836 |
+
"bbox": [
|
| 1837 |
+
141,
|
| 1838 |
+
683,
|
| 1839 |
+
460,
|
| 1840 |
+
713
|
| 1841 |
+
],
|
| 1842 |
+
"page_idx": 11
|
| 1843 |
+
},
|
| 1844 |
+
{
|
| 1845 |
+
"type": "text",
|
| 1846 |
+
"text": "Please do NOT output any extra content.",
|
| 1847 |
+
"bbox": [
|
| 1848 |
+
142,
|
| 1849 |
+
715,
|
| 1850 |
+
440,
|
| 1851 |
+
730
|
| 1852 |
+
],
|
| 1853 |
+
"page_idx": 11
|
| 1854 |
+
},
|
| 1855 |
+
{
|
| 1856 |
+
"type": "text",
|
| 1857 |
+
"text": "Sample Input (format only):",
|
| 1858 |
+
"bbox": [
|
| 1859 |
+
142,
|
| 1860 |
+
731,
|
| 1861 |
+
351,
|
| 1862 |
+
746
|
| 1863 |
+
],
|
| 1864 |
+
"page_idx": 11
|
| 1865 |
+
},
|
| 1866 |
+
{
|
| 1867 |
+
"type": "text",
|
| 1868 |
+
"text": "Before: XXX (Text before the continuous book part)",
|
| 1869 |
+
"bbox": [
|
| 1870 |
+
142,
|
| 1871 |
+
747,
|
| 1872 |
+
458,
|
| 1873 |
+
778
|
| 1874 |
+
],
|
| 1875 |
+
"page_idx": 11
|
| 1876 |
+
},
|
| 1877 |
+
{
|
| 1878 |
+
"type": "text",
|
| 1879 |
+
"text": "Part 1: XXX",
|
| 1880 |
+
"bbox": [
|
| 1881 |
+
142,
|
| 1882 |
+
778,
|
| 1883 |
+
238,
|
| 1884 |
+
791
|
| 1885 |
+
],
|
| 1886 |
+
"page_idx": 11
|
| 1887 |
+
},
|
| 1888 |
+
{
|
| 1889 |
+
"type": "text",
|
| 1890 |
+
"text": "Part 2: XXX",
|
| 1891 |
+
"bbox": [
|
| 1892 |
+
142,
|
| 1893 |
+
795,
|
| 1894 |
+
238,
|
| 1895 |
+
808
|
| 1896 |
+
],
|
| 1897 |
+
"page_idx": 11
|
| 1898 |
+
},
|
| 1899 |
+
{
|
| 1900 |
+
"type": "text",
|
| 1901 |
+
"text": "Part 3: XXX",
|
| 1902 |
+
"bbox": [
|
| 1903 |
+
142,
|
| 1904 |
+
812,
|
| 1905 |
+
238,
|
| 1906 |
+
825
|
| 1907 |
+
],
|
| 1908 |
+
"page_idx": 11
|
| 1909 |
+
},
|
| 1910 |
+
{
|
| 1911 |
+
"type": "text",
|
| 1912 |
+
"text": "Part 4: XXX",
|
| 1913 |
+
"bbox": [
|
| 1914 |
+
142,
|
| 1915 |
+
827,
|
| 1916 |
+
238,
|
| 1917 |
+
841
|
| 1918 |
+
],
|
| 1919 |
+
"page_idx": 11
|
| 1920 |
+
},
|
| 1921 |
+
{
|
| 1922 |
+
"type": "text",
|
| 1923 |
+
"text": "After: XXX (Text after the continuous book part)",
|
| 1924 |
+
"bbox": [
|
| 1925 |
+
141,
|
| 1926 |
+
843,
|
| 1927 |
+
460,
|
| 1928 |
+
873
|
| 1929 |
+
],
|
| 1930 |
+
"page_idx": 11
|
| 1931 |
+
},
|
| 1932 |
+
{
|
| 1933 |
+
"type": "text",
|
| 1934 |
+
"text": "Sample Output (format only):",
|
| 1935 |
+
"bbox": [
|
| 1936 |
+
142,
|
| 1937 |
+
876,
|
| 1938 |
+
361,
|
| 1939 |
+
891
|
| 1940 |
+
],
|
| 1941 |
+
"page_idx": 11
|
| 1942 |
+
},
|
| 1943 |
+
{
|
| 1944 |
+
"type": "text",
|
| 1945 |
+
"text": "Answer: [4, 1, 3, 2]",
|
| 1946 |
+
"bbox": [
|
| 1947 |
+
142,
|
| 1948 |
+
892,
|
| 1949 |
+
287,
|
| 1950 |
+
906
|
| 1951 |
+
],
|
| 1952 |
+
"page_idx": 11
|
| 1953 |
+
},
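Scoring a model reply against this output format could look like the sketch below; the regex and the strict exact-match rule are assumptions consistent with the accuracy definition in the paper, not its released evaluation code.

```python
import re

def parse_tsort_answer(reply: str) -> list[int] | None:
    """Extract an order like 'Answer: [4, 1, 3, 2]' from a model reply."""
    m = re.search(r"\[\s*(\d+(?:\s*,\s*\d+)*)\s*\]", reply)
    if m is None:
        return None  # counted as failing to follow the instruction
    return [int(x) for x in m.group(1).split(",")]

def is_correct(reply: str, gold: list[int]) -> bool:
    """A response is accurate only if it exactly restores the order."""
    return parse_tsort_answer(reply) == gold

assert is_correct("Answer: [4, 1, 3, 2]", [4, 1, 3, 2])
```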
|
| 1954 |
+
{
|
| 1955 |
+
"type": "text",
|
| 1956 |
+
"text": "BestAnswer:",
|
| 1957 |
+
"text_level": 1,
|
| 1958 |
+
"bbox": [
|
| 1959 |
+
537,
|
| 1960 |
+
92,
|
| 1961 |
+
638,
|
| 1962 |
+
105
|
| 1963 |
+
],
|
| 1964 |
+
"page_idx": 11
|
| 1965 |
+
},
|
| 1966 |
+
{
|
| 1967 |
+
"type": "text",
|
| 1968 |
+
"text": "You are an AI assistant. Your job is to find out the most helpful answer to a given question.",
|
| 1969 |
+
"bbox": [
|
| 1970 |
+
536,
|
| 1971 |
+
109,
|
| 1972 |
+
855,
|
| 1973 |
+
153
|
| 1974 |
+
],
|
| 1975 |
+
"page_idx": 11
|
| 1976 |
+
},
|
| 1977 |
+
{
|
| 1978 |
+
"type": "text",
|
| 1979 |
+
"text": "Each time, you will be provided with a question and n answers to this question.",
|
| 1980 |
+
"bbox": [
|
| 1981 |
+
536,
|
| 1982 |
+
156,
|
| 1983 |
+
855,
|
| 1984 |
+
187
|
| 1985 |
+
],
|
| 1986 |
+
"page_idx": 11
|
| 1987 |
+
},
|
| 1988 |
+
{
|
| 1989 |
+
"type": "text",
|
| 1990 |
+
"text": "Each answer begins with an 'A' and a number (e.g. A4), which represents its designation.",
|
| 1991 |
+
"bbox": [
|
| 1992 |
+
536,
|
| 1993 |
+
189,
|
| 1994 |
+
855,
|
| 1995 |
+
234
|
| 1996 |
+
],
|
| 1997 |
+
"page_idx": 11
|
| 1998 |
+
},
|
| 1999 |
+
{
|
| 2000 |
+
"type": "text",
|
| 2001 |
+
"text": "You need to determine which answer is the most helpful one to the question.",
|
| 2002 |
+
"bbox": [
|
| 2003 |
+
536,
|
| 2004 |
+
237,
|
| 2005 |
+
853,
|
| 2006 |
+
268
|
| 2007 |
+
],
|
| 2008 |
+
"page_idx": 11
|
| 2009 |
+
},
|
| 2010 |
+
{
|
| 2011 |
+
"type": "text",
|
| 2012 |
+
"text": "The case sample is shown below and you should give me the answer in the format exactly the same as the sample.",
|
| 2013 |
+
"bbox": [
|
| 2014 |
+
536,
|
| 2015 |
+
269,
|
| 2016 |
+
853,
|
| 2017 |
+
316
|
| 2018 |
+
],
|
| 2019 |
+
"page_idx": 11
|
| 2020 |
+
},
|
| 2021 |
+
{
|
| 2022 |
+
"type": "text",
|
| 2023 |
+
"text": "However, you should NOT focus on the content of sample answer.",
|
| 2024 |
+
"bbox": [
|
| 2025 |
+
536,
|
| 2026 |
+
318,
|
| 2027 |
+
855,
|
| 2028 |
+
349
|
| 2029 |
+
],
|
| 2030 |
+
"page_idx": 11
|
| 2031 |
+
},
|
| 2032 |
+
{
|
| 2033 |
+
"type": "text",
|
| 2034 |
+
"text": "Sample Input (format only):",
|
| 2035 |
+
"bbox": [
|
| 2036 |
+
537,
|
| 2037 |
+
350,
|
| 2038 |
+
746,
|
| 2039 |
+
365
|
| 2040 |
+
],
|
| 2041 |
+
"page_idx": 11
|
| 2042 |
+
},
|
| 2043 |
+
{
|
| 2044 |
+
"type": "text",
|
| 2045 |
+
"text": "The question is given below.",
|
| 2046 |
+
"bbox": [
|
| 2047 |
+
537,
|
| 2048 |
+
367,
|
| 2049 |
+
747,
|
| 2050 |
+
381
|
| 2051 |
+
],
|
| 2052 |
+
"page_idx": 11
|
| 2053 |
+
},
|
| 2054 |
+
{
|
| 2055 |
+
"type": "text",
|
| 2056 |
+
"text": "XXX(The content of question)",
|
| 2057 |
+
"bbox": [
|
| 2058 |
+
537,
|
| 2059 |
+
382,
|
| 2060 |
+
764,
|
| 2061 |
+
397
|
| 2062 |
+
],
|
| 2063 |
+
"page_idx": 11
|
| 2064 |
+
},
|
| 2065 |
+
{
|
| 2066 |
+
"type": "text",
|
| 2067 |
+
"text": "Possible answers are given below.",
|
| 2068 |
+
"bbox": [
|
| 2069 |
+
537,
|
| 2070 |
+
399,
|
| 2071 |
+
788,
|
| 2072 |
+
413
|
| 2073 |
+
],
|
| 2074 |
+
"page_idx": 11
|
| 2075 |
+
},
|
| 2076 |
+
{
|
| 2077 |
+
"type": "text",
|
| 2078 |
+
"text": "A1:",
|
| 2079 |
+
"bbox": [
|
| 2080 |
+
537,
|
| 2081 |
+
414,
|
| 2082 |
+
566,
|
| 2083 |
+
426
|
| 2084 |
+
],
|
| 2085 |
+
"page_idx": 11
|
| 2086 |
+
},
|
| 2087 |
+
{
|
| 2088 |
+
"type": "text",
|
| 2089 |
+
"text": "XXX(The content of answer 1)",
|
| 2090 |
+
"bbox": [
|
| 2091 |
+
537,
|
| 2092 |
+
430,
|
| 2093 |
+
768,
|
| 2094 |
+
444
|
| 2095 |
+
],
|
| 2096 |
+
"page_idx": 11
|
| 2097 |
+
},
|
| 2098 |
+
{
|
| 2099 |
+
"type": "text",
|
| 2100 |
+
"text": "A2:",
|
| 2101 |
+
"bbox": [
|
| 2102 |
+
537,
|
| 2103 |
+
447,
|
| 2104 |
+
566,
|
| 2105 |
+
458
|
| 2106 |
+
],
|
| 2107 |
+
"page_idx": 11
|
| 2108 |
+
},
|
| 2109 |
+
{
|
| 2110 |
+
"type": "text",
|
| 2111 |
+
"text": "XXX(The content of answer 2)",
|
| 2112 |
+
"bbox": [
|
| 2113 |
+
537,
|
| 2114 |
+
463,
|
| 2115 |
+
768,
|
| 2116 |
+
476
|
| 2117 |
+
],
|
| 2118 |
+
"page_idx": 11
|
| 2119 |
+
},
|
| 2120 |
+
{
|
| 2121 |
+
"type": "text",
|
| 2122 |
+
"text": ".",
|
| 2123 |
+
"bbox": [
|
| 2124 |
+
537,
|
| 2125 |
+
485,
|
| 2126 |
+
547,
|
| 2127 |
+
492
|
| 2128 |
+
],
|
| 2129 |
+
"page_idx": 11
|
| 2130 |
+
},
|
| 2131 |
+
{
|
| 2132 |
+
"type": "text",
|
| 2133 |
+
"text": ".",
|
| 2134 |
+
"bbox": [
|
| 2135 |
+
537,
|
| 2136 |
+
499,
|
| 2137 |
+
546,
|
| 2138 |
+
505
|
| 2139 |
+
],
|
| 2140 |
+
"page_idx": 11
|
| 2141 |
+
},
|
| 2142 |
+
{
|
| 2143 |
+
"type": "text",
|
| 2144 |
+
"text": "An:",
|
| 2145 |
+
"bbox": [
|
| 2146 |
+
537,
|
| 2147 |
+
527,
|
| 2148 |
+
566,
|
| 2149 |
+
539
|
| 2150 |
+
],
|
| 2151 |
+
"page_idx": 11
|
| 2152 |
+
},
|
| 2153 |
+
{
|
| 2154 |
+
"type": "text",
|
| 2155 |
+
"text": "XXX(The content of answer n)",
|
| 2156 |
+
"bbox": [
|
| 2157 |
+
537,
|
| 2158 |
+
543,
|
| 2159 |
+
768,
|
| 2160 |
+
557
|
| 2161 |
+
],
|
| 2162 |
+
"page_idx": 11
|
| 2163 |
+
},
|
| 2164 |
+
{
|
| 2165 |
+
"type": "text",
|
| 2166 |
+
"text": "Now the answers are over, please decide which answer is the most helpful one to the question. You must give me only the designation of the MOST helpful answer.",
|
| 2167 |
+
"bbox": [
|
| 2168 |
+
536,
|
| 2169 |
+
558,
|
| 2170 |
+
855,
|
| 2171 |
+
621
|
| 2172 |
+
],
|
| 2173 |
+
"page_idx": 11
|
| 2174 |
+
},
|
| 2175 |
+
{
|
| 2176 |
+
"type": "text",
|
| 2177 |
+
"text": "Sample Output (format only):",
|
| 2178 |
+
"bbox": [
|
| 2179 |
+
537,
|
| 2180 |
+
624,
|
| 2181 |
+
757,
|
| 2182 |
+
638
|
| 2183 |
+
],
|
| 2184 |
+
"page_idx": 11
|
| 2185 |
+
},
|
| 2186 |
+
{
|
| 2187 |
+
"type": "text",
|
| 2188 |
+
"text": "Answer: The designation of the most helpful answer.(e.g. A4 means answer 4 is the most helpful answer)",
|
| 2189 |
+
"bbox": [
|
| 2190 |
+
536,
|
| 2191 |
+
639,
|
| 2192 |
+
855,
|
| 2193 |
+
687
|
| 2194 |
+
],
|
| 2195 |
+
"page_idx": 11
|
| 2196 |
+
},
|
| 2197 |
+
{
|
| 2198 |
+
"type": "page_number",
|
| 2199 |
+
"text": "3723",
|
| 2200 |
+
"bbox": [
|
| 2201 |
+
480,
|
| 2202 |
+
927,
|
| 2203 |
+
519,
|
| 2204 |
+
940
|
| 2205 |
+
],
|
| 2206 |
+
"page_idx": 11
|
| 2207 |
+
},
|
| 2208 |
+
{
|
| 2209 |
+
"type": "table",
|
| 2210 |
+
"img_path": "images/15dc18ea849a5bac071fd534bf7c0b00339d97a47959a42b6336b53ab3434ca4.jpg",
|
| 2211 |
+
"table_caption": [],
|
| 2212 |
+
"table_footnote": [],
|
| 2213 |
+
"table_body": "<table><tr><td>BestAnswer (200-testcase)</td><td>1k</td><td>2k</td><td>4k</td><td>6k</td><td>8k</td><td>12k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-0125</td><td>73.5</td><td>73.5</td><td>65.5</td><td>63.0</td><td>56.5</td><td>52.0</td><td>44.5</td></tr><tr><td>GPT-4-Turbo-1106</td><td>74.0</td><td>73.5</td><td>67.5</td><td>59.5</td><td>53.5</td><td>49.5</td><td>44.0</td></tr><tr><td>GPT-3.5-turbo-1106</td><td>61.5</td><td>48.5</td><td>41.5</td><td>29.5</td><td>17.0</td><td>2.5</td><td>2.5</td></tr><tr><td>Claude-2</td><td>65.0</td><td>43.5</td><td>23.5</td><td>15.0</td><td>17.0</td><td>12.0</td><td>11.0</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>32.5</td><td>8.0</td><td>3.5</td><td>3.0</td><td>2.5</td><td>1.5</td><td>1.0</td></tr><tr><td>ChatGLM2-6B-32k</td><td>36.0</td><td>10.5</td><td>3.0</td><td>0.5</td><td>1.5</td><td>0.0</td><td>0.0</td></tr><tr><td>ChatGLM3-6B-32k</td><td>37.0</td><td>15.5</td><td>5.5</td><td>4.0</td><td>5.5</td><td>0.5</td><td>0.5</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>32.5</td><td>8.5</td><td>2.5</td><td>3.5</td><td>3.0</td><td>0.5</td><td>2.0</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>52.0</td><td>29.0</td><td>11.0</td><td>4.0</td><td>1.5</td><td>1.0</td><td>1.5</td></tr><tr><td>Random Guess</td><td>26.7</td><td>10.1</td><td>4.5</td><td>3.0</td><td>2.3</td><td>1.4</td><td>1.1</td></tr></table>",
|
| 2214 |
+
"bbox": [
|
| 2215 |
+
201,
|
| 2216 |
+
401,
|
| 2217 |
+
801,
|
| 2218 |
+
571
|
| 2219 |
+
],
|
| 2220 |
+
"page_idx": 12
|
| 2221 |
+
},
|
| 2222 |
+
{
|
| 2223 |
+
"type": "text",
|
| 2224 |
+
"text": "Table 13: BestAnswer results under long-context settings(200-testcase subset).",
|
| 2225 |
+
"bbox": [
|
| 2226 |
+
228,
|
| 2227 |
+
581,
|
| 2228 |
+
766,
|
| 2229 |
+
595
|
| 2230 |
+
],
|
| 2231 |
+
"page_idx": 12
|
| 2232 |
+
},
|
| 2233 |
+
{
|
| 2234 |
+
"type": "page_number",
|
| 2235 |
+
"text": "3724",
|
| 2236 |
+
"bbox": [
|
| 2237 |
+
480,
|
| 2238 |
+
928,
|
| 2239 |
+
519,
|
| 2240 |
+
940
|
| 2241 |
+
],
|
| 2242 |
+
"page_idx": 12
|
| 2243 |
+
}
|
| 2244 |
+
]
|
2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/8df20923-51bb-4d14-9407-d091f93cc639_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/8df20923-51bb-4d14-9407-d091f93cc639_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:33f02b21c75239f7550d627fe7d074c4c812f4fc234f0968199a50335ccd8a2e
|
| 3 |
+
size 609067
|
2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/full.md
ADDED
|
@@ -0,0 +1,377 @@
|
| 1 |
+
# Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks
|
| 2 |
+
|
| 3 |
+
Chonghua Wang$^{2*}$, Haodong Duan$^{1\dagger}$, Songyang Zhang$^{1}$, Dahua Lin$^{1}$, Kai Chen$^{1‡}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Shanghai AI Laboratory
|
| 6 |
+
|
| 7 |
+
$^{2}$ Shanghai Jiao Tong University
|
| 8 |
+
|
| 9 |
+
philipwang@sjtu.edu.cn
|
| 10 |
+
|
| 11 |
+
duanhaodong@pjlab.org.cn
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
Recently, the large language model (LLM) community has shown increasing interest in enhancing LLMs' capability to handle extremely long documents. As various long-text techniques and model architectures emerge, the precise and detailed evaluation of models' long-text capabilities has become increasingly important. Existing long-text evaluation benchmarks, such as L-Eval and LongBench, construct long-text test sets based on open-source datasets, focusing mainly on QA and summarization tasks. These datasets include test samples of varying lengths (from 2k to $32\mathrm{k}+$) entangled together, making it challenging to assess model capabilities across different length ranges. Moreover, they do not cover the ultra-long settings ($100\mathrm{k}+$ tokens) that the latest LLMs claim to achieve. In this paper, we introduce Ada-LEval, a length-adaptable benchmark for evaluating the long-context understanding of LLMs. Ada-LEval includes two challenging subsets, TSort and BestAnswer, which enable a more reliable evaluation of LLMs' long-context capabilities. These benchmarks support intricate manipulation of the length of test cases, and can easily produce text samples up to 128k tokens. We evaluate 4 state-of-the-art closed-source API models and 6 open-source models with Ada-LEval. The evaluation results demonstrate the limitations of current LLMs, especially in ultra-long-context settings. Our code is available at https://github.com/open-compass/Ada-LEval.
|
| 16 |
+
|
| 17 |
+
# 1 Introduction
|
| 18 |
+
|
| 19 |
+
Large Language Models (LLMs), typically based on large transformers trained on vast corpora, have shown exceptional abilities in memorization, comprehension, and reasoning (OpenAI, 2023; Touvron et al., 2023; Zheng et al., 2023). A critical factor
|
| 20 |
+
|
| 21 |
+
[Figure 1, first panel: TSort Task]
|
| 22 |
+
BestAnswer Task
|
| 23 |
+
|
| 24 |
+
[Figure 1, second panel: BestAnswer Task image]
|
| 25 |
+
Figure 1: The demonstration of two tasks: TSort and BestAnswer introduced in Ada-LEval. Understanding and reasoning over the full text are required to solve these two tasks.
|
| 26 |
+
|
| 27 |
+
that affects LLM performance is the 'context window' - the number of tokens an LLM can process simultaneously. This window's size is pivotal in handling lengthy texts. Since the debut of ChatGPT with a 2,000-token window in November 2022, significant efforts have been made in this domain, including more efficient attention mechanisms (Dao et al., 2022a; Zaheer et al., 2020; Ding et al., 2023), scalable position embeddings (Su et al., 2021; Sun et al., 2022), and quantization techniques (Frantar et al., 2022; Dettmers et al., 2022). As of December 2023, several LLMs claim to achieve context windows up to hundreds of thousands of tokens. This includes both proprietary models like GPT-4 Turbo
|
| 28 |
+
|
| 29 |
+
(128,000 tokens), Claude-2.1 (200,000 tokens), and Moonshot AI (200,000 Chinese characters), and open-source models such as ChatGLM-32k (Zeng et al., 2022) and LongChat-32k (Li* et al., 2023). This expansion significantly enhances the potential for processing extensive documents. Nevertheless, the effectiveness of these long-context LLMs in managing long texts remains an area ripe for exploration and assessment.
|
| 30 |
+
|
| 31 |
+
Alongside the evolution of LLMs, a wide range of benchmarks have emerged for capability assessment (Hendrycks et al., 2020; Suzgun et al., 2022; Cobbe et al., 2021; Huang et al., 2023). Most of those benchmarks utilize short questions or instructions, making them unsuitable for evaluating LLMs' long-context capabilities. While a few benchmarks do focus on assessing specific long-context abilities like summarization, question-answering (QA), and continue writing (Huang et al., 2021; Liu et al., 2023b; Dasigi et al., 2021), comprehensive long-document evaluations have been limited. Recent benchmarks such as SCROLLS (Shaham et al., 2022), L-Eval (An et al., 2023) and LongBench (Bai et al., 2023) have started to address this gap by including a suite of long-document tasks, aiming for a more holistic assessment of LLMs' long-context understanding.
|
| 32 |
+
|
| 33 |
+
Despite these advancements, three significant limitations persist in existing benchmarks: Firstly, the ultra-long setting (32,000 tokens or longer) is scarcely represented, limiting insights into LLM performance in extreme context lengths. Secondly, the integration of test samples of varying lengths within these benchmarks complicates the evaluation of LLMs across different length ranges. Lastly, the focus on traditional tasks such as question-answering and summarization often does not necessitate comprehensive content understanding by the LLMs, as many questions in these tasks do not require full-text comprehension. This highlights the need for more targeted benchmarks that can rigorously evaluate the deep and complete understanding of long-form content by LLMs.
|
| 34 |
+
|
| 35 |
+
To this end, we introduce Ada-LEval, a pioneering benchmark to assess the long-context capabilities with length-adaptable questions. Ada-LEval comprises two challenging tasks: TSort, which involves arranging text segments in the correct order, and BestAnswer, which requires choosing the best answer of a question among multiple candidates. Both tasks feature the following advantages: 1. Controllable Test Cases: The length of each test
|
| 36 |
+
|
| 37 |
+
case can be finely tuned - by adjusting the number and length of text segments in TSort and altering the number of distractor options in BestAnswer. 2. Necessity for Full-Text Comprehension: Successful completion of both tasks mandates complete reading and understanding of the provided text. 3. Precise Accuracy Measurement: The design of these tasks allows for unambiguous accuracy calculation. TSort has a definitive 'correct' order, whereas in BestAnswer, the annotated responses by the questioner serve as definitive answers.
|
| 38 |
+
|
| 39 |
+
Our experiments on these tasks reveal critical insights. We observe a noteworthy decline in the performance of existing LLMs as text length increases, particularly in ultra-long scenarios. Furthermore, our ablation study uncovers several shortcomings in current LLMs, including limited instruction following over extended texts and pronounced input order bias. Additionally, we explore various scalable position embedding techniques aimed at enlarging the context window of LLMs. Our findings indicate that models equipped with those techniques show improved performance over the standard models, and the performance is comparable to their counterparts trained on longer contexts.
|
| 40 |
+
|
| 41 |
+
# 2 Related Work
|
| 42 |
+
|
| 43 |
+
# 2.1 Long-Context Techniques
|
| 44 |
+
|
| 45 |
+
To address the complexities introduced by the increased text length in language models, researchers have developed a range of innovative techniques. These methodologies primarily focus on the following key areas: more efficient attention mechanisms, divide-and-conquer paradigms, and scalable position embedding techniques.
|
| 46 |
+
|
| 47 |
+
Efficient Attention Mechanisms. Notable advancements in attention mechanisms within Transformers have been achieved by several studies (Zaheer et al., 2020; Guo et al., 2021; Dao et al., 2022b; Ding et al., 2023). A key development in this area is Flash Attention (Dao et al., 2022a), which streamlines the attention process by circumventing the need to read and write the attention matrix across different memory tiers. This approach results in faster processing and reduced memory usage compared to traditional attention methods. In LongNet, Ding et al. (2023) introduces Dilated Attention, which reduces the computation complexity of attention to nearly linear and scales to 1 billion tokens. However, Liu et al. (2023a) identified a limitation where these mechanisms tend to falter with the
|
| 48 |
+
|
| 49 |
+
middle portions of long texts.
|
| 50 |
+
|
| 51 |
+
Divide-and-Conquer. In exploring alternatives to conventional long-text modeling, several studies have adopted a segmented approach to manage extensive content. WebGPT (Nakano et al., 2021) addresses long-form QA by interacting with a text-based web-browsing environment. PEARL (Sun et al., 2023) introduces a framework that prompts LLMs to generate and execute plans for tackling complex long-text reasoning tasks. Chen et al. (2023a) constructs a memory tree with the summarization of document segments and navigates on the memory tree to answer the original question.
|
| 52 |
+
|
| 53 |
+
Scalable Position Embeddings. Scalable position embeddings have been instrumental in extending the context window of LLMs. RoPE (Su et al., 2021) utilizes a rotation matrix to enhance positional information, integrating explicit relative position dependencies into the self-attention mechanism. ALiBi (Press et al., 2021) does not add position embeddings to word embeddings, instead applying a linearly decreasing penalty to attention scores based on key-query distances. Position Interpolation (Chen et al., 2023b) adopts a different strategy by linearly scaling down input position indices to align with preset context window sizes, requiring few fine-tuning steps. NTK-aware Scaled RoPE<sup>1</sup> and ReRoPE (Su, 2023) further combine the benefits of position interpolation and length extrapolation methods without any fine-tuning steps.
|
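As a concrete illustration of the Position Interpolation idea above (a sketch of the rescaling step only, not any model's actual implementation): input position indices are linearly scaled down so that an extended window maps back into the window the model was trained on, before the rotary embedding is applied.

```python
def interpolate_positions(positions, trained_len: int, target_len: int):
    """Position Interpolation (Chen et al., 2023b): linearly rescale
    positions from the extended window back into the trained window."""
    scale = trained_len / target_len  # e.g. 4096 / 16384 = 0.25
    return [p * scale for p in positions]

# A token at position 8192 in a 16k window behaves like position 2048
# for a model pretrained with a 4k context window.
print(interpolate_positions([8192], trained_len=4096, target_len=16384))  # [2048.0]
```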
| 54 |
+
|
| 55 |
+
# 2.2 Long-Context Language Models
|
| 56 |
+
|
| 57 |
+
Building on advancements in long-context techniques, several long-context LLMs are developed and released. Llama 2 (Touvron et al., 2023) integrates RoPE to expand its context window to 4,000 tokens. Vicuna-v1.5 (Zheng et al., 2023) further extends this capability by fine-tuning Llama 2 on high-quality, extensive conversations, successfully increasing the context window to 16,000 tokens. Longchat (Li* et al., 2023) models condense RoPE to utilize model weights learned in the pretraining stage. ChatGLM2-32k (Zeng et al., 2022) is trained on a 32,000-token context length using position interpolation, showcasing the scalability of this technique.
|
| 58 |
+
|
| 59 |
+
The domain of proprietary language models has seen even more significant advancements in long-context modeling, stepping into the ultra-long con
|
| 60 |
+
|
| 61 |
+
text field. GPT-4-Turbo (OpenAI, 2023) notably extends its context window to an impressive 128,000 tokens. In a similar vein, Claude-2 and Claude-2.1 have achieved context lengths of 100,000 and 200,000 tokens, respectively. This expansion allows them to process vast quantities of information, such as hundreds of pages of technical documentation or entire books. Kimi Chat, developed by Moonshot.ai, claims to handle up to 200,000 Chinese characters. However, no existing dataset can evaluate their capability to tackle such long texts.
|
| 62 |
+
|
| 63 |
+
# 2.3 Long-Context Benchmarks
|
| 64 |
+
|
| 65 |
+
Efforts to evaluate the long-context capabilities of language models have been intensifying, with a focus primarily on traditional question-answering (QA) and summarization tasks. NarrativeQA (Kočisky et al., 2018) offers a question-answering dataset built on the entire books from Project Gutenberg and movie transcripts. GovReport (Huang et al., 2021) provides a dataset comprising national policy issues, each accompanied by an expert-written summary, thus testing models' ability to distill complex, lengthy documents into concise summaries. Based on existing long-context benchmarks, SCROLLS(Shaham et al., 2022) introduces a suite of datasets that requires models to process and reason over long contexts.
|
| 66 |
+
|
| 67 |
+
Concurrently, L-Eval (An et al., 2023) and LongBench (Bai et al., 2023) are designed for comprehensive evaluation of the long-context capabilities of LLMs. L-Eval offers a collection of long documents across different domains and provides both close-ended and open-ended tasks. LongBench is a bilingual long-context benchmark covering six task categories. Most tasks in these benchmarks are traditional QA and summarization with fixed documents, questions, and answers. They are inflexible in text length (up to $\sim 32,000$ tokens) and fall short of adapting to ultra-long-context evaluation. Additionally, LongBench uses mostly open-ended tasks with traditional F1 and ROUGE metrics that may not align well with human judgments. In contrast, our benchmarks support length-adaptable evaluation, provide sufficient cases, and evaluate models using accuracy metrics, avoiding inconsistencies with human evaluation.
|
| 68 |
+
|
| 69 |
+
# 3 Ada-LEval
|
| 70 |
+
|
| 71 |
+
In this section, we outline the construction process of Ada-LEval, detailing both the collection
|
| 72 |
+
|
| 73 |
+
methodology of our source data and the building procedure of our test cases. Table 1 demonstrates the data statistics of Ada-LEval.
|
| 74 |
+
|
| 75 |
+
<table><tr><td colspan="4">TSort</td></tr><tr><td>Setting</td><td>Total #Cases Built</td><td>Max #Tokens</td><td>Avg #Tokens</td></tr><tr><td>2k</td><td>5123</td><td>2000</td><td>1816</td></tr><tr><td>4k</td><td>5451</td><td>4000</td><td>3724</td></tr><tr><td>8k</td><td>5324</td><td>8000</td><td>7663</td></tr><tr><td>16k</td><td>4957</td><td>16000</td><td>15662</td></tr><tr><td>32k</td><td>2206</td><td>32000</td><td>31226</td></tr><tr><td>64k</td><td>1658</td><td>64000</td><td>62407</td></tr><tr><td>128k</td><td>782</td><td>127800</td><td>121488</td></tr></table>
|
| 76 |
+
|
| 77 |
+
<table><tr><td colspan="4">BestAnswer</td></tr><tr><td>Setting</td><td>Total #Cases Built</td><td>Max #Tokens</td><td>Avg #Tokens</td></tr><tr><td>1k</td><td>7526</td><td>1128</td><td>955</td></tr><tr><td>2k</td><td>7526</td><td>2154</td><td>1983</td></tr><tr><td>4k</td><td>7526</td><td>4215</td><td>3994</td></tr><tr><td>6k</td><td>7526</td><td>6268</td><td>6012</td></tr><tr><td>8k</td><td>7526</td><td>7790</td><td>7518</td></tr><tr><td>12k</td><td>7526</td><td>12389</td><td>12091</td></tr><tr><td>16k</td><td>7526</td><td>15964</td><td>15646</td></tr><tr><td>32k</td><td>200</td><td>32974</td><td>32329</td></tr><tr><td>64k</td><td>200</td><td>64216</td><td>63274</td></tr><tr><td>128k</td><td>200</td><td>127059</td><td>126098</td></tr></table>
|
| 78 |
+
|
| 79 |
+
Table 1: The data statistics of TSort and BestAnswer. We adopt the GPT-4 tokenizer CL100K to calculate token numbers. We use a subset of all built cases for evaluation.
|
| 80 |
+
|
| 81 |
+
# 3.1 Task Definition
|
| 82 |
+
|
| 83 |
+
TSort. TSort provides LLMs with N shuffled text segments, extracted from contiguous chapters of a long book. The task for models is to sort these segments into their original sequence. A response is regarded accurate only if it precisely reinstates the segments' initial order. To simplify the challenge and minimize possible confusion, we supply LLMs with adjacent paragraphs from before and after the specified chapters to serve as contextual hints.
|
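A minimal sketch of how a TSort case and its gold answer could be assembled, under the answer convention used in the appendix ('[4, 1, 3, 2]' lists the presented parts in their original reading order); the helper name and seeding are illustrative assumptions.

```python
import random

def make_tsort_case(segments: list[str], seed: int = 0):
    """Shuffle N contiguous segments; the gold answer is the permutation
    that restores the original order (a sketch of the task format)."""
    rng = random.Random(seed)
    order = list(range(1, len(segments) + 1))
    rng.shuffle(order)  # order[j] = original index of presented Part j+1
    shuffled = [segments[i - 1] for i in order]
    # gold[k] = designation of the presented part that comes k-th originally
    gold = [order.index(k) + 1 for k in range(1, len(segments) + 1)]
    return shuffled, gold
```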
| 84 |
+
|
| 85 |
+
BestAnswer. Each test case in BestAnswer contains one question and a large number of possible answers to this question. We consider the answer designated by the original inquirer as the most helpful answer, while LLMs are required to identify this optimal answer among all possible candidates.
|
| 86 |
+
|
| 87 |
+
# 3.2 Source Data Collection
|
| 88 |
+
|
| 89 |
+
TSort. For TSort, we sourced our initial data from Booksum (Krysciński et al., 2021), a text summarization dataset derived from the Project Gutenberg, a public book repository consisting of over 60,000 free eBooks spanning various literary genres including novels, plays, short stories, and more. Genres like epistolary literature and poetry are excluded in
|
| 90 |
+
|
| 91 |
+
the construction of the TSort benchmark due to their non-sequential nature. To prevent LLMs from exploiting superficial cues, we meticulously remove identifiers such as chapter numbers and annotations from the content.
|
| 92 |
+
|
| 93 |
+
BestAnswer. The BestAnswer benchmark is constructed from threads on Stack Overflow, a platform renowned for its extensive range of programming-related questions and answers. Stack Overflow questions are categorized by multiple tags, indicating the thematic similarity of questions within each tag. To ensure the quality and diversity of our benchmark, we choose 23 different tags, including javascript, python, C++, etc., and collect the top 2,500 questions from each tag based on popularity.
# 3.3 Test Case Building
For both tasks, we construct test cases according to their token length (measured by the GPT-4 tokenizer). We regard token lengths between 1,000 and 16,000 as long-context settings and text lengths exceeding 16,000 as ultra-long-context settings.
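These token lengths can be reproduced with the open-source `tiktoken` library, which ships the GPT-4 `cl100k_base` encoding; the bucketing helper below is a minimal sketch of ours, not code from the paper.

```python
import tiktoken

# cl100k_base is the encoding used by the GPT-4 tokenizer.
ENC = tiktoken.get_encoding("cl100k_base")

def length_bucket(case_text: str) -> str:
    """Bucket a built test case by its token count."""
    n_tokens = len(ENC.encode(case_text))
    return "long-context" if n_tokens <= 16_000 else "ultra-long-context"
```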
Under long-context settings, TSort spans test cases with 2k, 4k, 8k, and 16k tokens. For each length, we fix the segment number $N = 4$ and set length upper limits for each text segment and for the adjacent paragraphs before and after the contiguous chapters. We ensure that each text segment contains complete paragraphs, so no paragraph is sliced in the middle. To build test cases with different contents, we set a stride between the beginning paragraphs of consecutive test cases during construction. After prepending the instructions, we further filter out test cases that exceed the token upper bound.
For BestAnswer, we generate test cases with 1k, 2k, 4k, 6k, 8k, 12k, and 16k tokens under long-context settings. Under each length setting, a test case contains the distractor answers posted under the corresponding question, plus an adaptable number of distractor answers drawn from other similar questions. To make evaluation results directly comparable across different length settings in long-context scenarios, we ensure that the questions within the BestAnswer benchmark remain unchanged regardless of the case length. In BestAnswer, we define the most helpful answer as the answer explicitly accepted by the inquirer and adopt it as the 'groundtruth answer'. For integrity reasons, we exclude all questions where the corresponding most helpful answer is not text-only. When choosing the distractors, we only consider answers that were provided prior to the accepted answer under the corresponding question. Besides, we incorporate answers from other questions with similar tags to the original question to serve as distractor answers.
Under ultra-long-context settings, we build test cases with 32k, 64k, and 128k tokens for both tasks. The construction paradigm is similar to the long-context setting. For BestAnswer, since the number of similar questions and corresponding answers is limited, we relax the tag similarity constraints and allow answers from questions with less similar tags to serve as distractor answers.
# 4 Evaluation Results
# 4.1 Experiment Setup
We evaluate the following LLMs under long-context settings: 4 proprietary models: (1) GPT-4-Turbo-0125, (2) GPT-4-Turbo-1106, (3) GPT-3.5-Turbo-1106, (4) Claude-2; and 6 open-source models: (5) LongChat-7b-v1.5-32k (Zheng et al., 2023), (6) ChatGLM2-6B-32k (Zeng et al., 2022), (7) ChatGLM3-6B-32k (Zeng et al., 2022), (8) Vicuna-7b-v1.5-16k (Zheng et al., 2023), (9) Vicuna-13b-v1.5-16k (Zheng et al., 2023), (10) InternLM2-7b (Cai et al., 2024). Due to the inferior performance of open-source LLMs under long-context settings, only models with good performance (GPT-4-Turbo, Claude-2, etc.) are evaluated under ultra-long-context settings.
For open-source LLMs, we sample a 1000-testcase subset for evaluation under each length setting. Due to the costly APIs of state-of-the-art proprietary models (GPT-4-Turbo, Claude-2, etc.), we adopt a 200-testcase subset (sampled from the 1000-testcase set) for evaluation under long-context settings and a 50-testcase subset under ultra-long-context settings. All experiments are conducted using the open-source LLM evaluation platform OpenCompass (Contributors, 2023). We adopt the zero-shot setting for all evaluations and provide a 'random guess' baseline. We also measure the instruction following rate and the copy instruction rate on both tasks.
# 4.2 Long-Context Evaluation Results
<table><tr><td>TSort</td><td>2k</td><td>4k</td><td>8k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-0125</td><td>15.5</td><td>16.5</td><td>8.5</td><td>5.5</td></tr><tr><td>GPT-4-Turbo-1106</td><td>18.5</td><td>15.5</td><td>7.5</td><td>3.5</td></tr><tr><td>GPT-3.5-Turbo-1106</td><td>4.0</td><td>4.5</td><td>4.5</td><td>5.5</td></tr><tr><td>Claude-2</td><td>5.0</td><td>5.0</td><td>4.5</td><td>3.0</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>5.3</td><td>5.0</td><td>3.1</td><td>2.5</td></tr><tr><td>ChatGLM2-6B-32k</td><td>0.9</td><td>0.7</td><td>0.2</td><td>0.9</td></tr><tr><td>ChatGLM3-6B-32k</td><td>2.3</td><td>2.4</td><td>2.0</td><td>0.7</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>5.3</td><td>2.2</td><td>2.3</td><td>1.7</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>5.4</td><td>5.0</td><td>2.4</td><td>3.1</td></tr><tr><td>InternLM2-7b</td><td>5.1</td><td>3.9</td><td>5.1</td><td>4.3</td></tr><tr><td>Random Guess</td><td>4.2</td><td>4.2</td><td>4.2</td><td>4.2</td></tr></table>
Table 2: TSort results under long-context settings. We fix the number of segments $N = 4$ for TSort evaluation, thus the random guess accuracy is roughly $4.2\%$ (1/24).
TSort. Table 2 displays the test accuracy of various LLMs on the TSort task. This evaluation underscores the complexity of TSort, highlighting its intricate nature that necessitates comprehensive understanding and reasoning across long text. Under settings from 2,000 to 8,000 tokens, only the most powerful proprietary model, GPT-4-Turbo, outputs the correct order of texts with a significantly higher probability than the random baseline. When the context window expands to 16,000 tokens, the quality of GPT-4-Turbo's predictions also deteriorates to the random guess level. Other LLMs, encompassing both proprietary and open-source models, all display performance similar to random guess (even under the relatively short 2k setting). The results indicate that the TSort task poses a severe challenge to existing LLMs.
BestAnswer. Table 3 presents the test accuracy of LLMs on BestAnswer. GPT-4-Turbo establishes the state of the art on the BestAnswer benchmark. It achieves an outstanding $44.5\%$ accuracy under the 16k long-context setting, where around 100 distractor answers exist for each question. Among other proprietary models, Claude-2 achieves the second-best accuracy ($11\%$) under the 16k setting. GPT-3.5-Turbo-1106, while outperforming Claude-2 under some relatively short settings (2k, 4k, 6k), demonstrates performance similar to random guess under the 16k setting. There is a considerable performance gap between proprietary models and open-source models on BestAnswer. Although some models like Vicuna-13b-v1.5-16k and InternLM2-7b perform well under short settings, a dramatic accuracy decline can be observed as text length becomes larger.
<table><tr><td>BestAnswer</td><td>1k</td><td>2k</td><td>4k</td><td>6k</td><td>8k</td><td>12k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-0125</td><td>73.5</td><td>73.5</td><td>65.5</td><td>63.0</td><td>56.5</td><td>52.0</td><td>44.5</td></tr><tr><td>GPT-4-Turbo-1106</td><td>74.0</td><td>73.5</td><td>67.5</td><td>59.5</td><td>53.5</td><td>49.5</td><td>44.0</td></tr><tr><td>GPT-3.5-Turbo-1106</td><td>61.5</td><td>48.5</td><td>41.5</td><td>29.5</td><td>17.0</td><td>2.5</td><td>2.5</td></tr><tr><td>Claude-2</td><td>65.0</td><td>43.5</td><td>23.5</td><td>15.0</td><td>17.0</td><td>12.0</td><td>11.0</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>32.4</td><td>10.7</td><td>5.7</td><td>3.1</td><td>1.9</td><td>1.6</td><td>0.8</td></tr><tr><td>ChatGLM2-6B-32k</td><td>31.2</td><td>10.9</td><td>4.5</td><td>1.6</td><td>1.6</td><td>0.0</td><td>0.3</td></tr><tr><td>ChatGLM3-6B-32k</td><td>39.8</td><td>18.8</td><td>9.0</td><td>5.0</td><td>3.4</td><td>0.9</td><td>0.5</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>37.0</td><td>11.1</td><td>5.8</td><td>3.2</td><td>1.8</td><td>1.9</td><td>1.0</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>53.4</td><td>29.2</td><td>13.1</td><td>4.3</td><td>2.2</td><td>1.4</td><td>0.9</td></tr><tr><td>InternLM2-7b</td><td>58.6</td><td>49.5</td><td>33.9</td><td>12.3</td><td>13.4</td><td>2.0</td><td>0.8</td></tr><tr><td>Random Guess</td><td>26.7</td><td>10.1</td><td>4.5</td><td>3.0</td><td>2.3</td><td>1.4</td><td>1.1</td></tr></table>
Table 3: BestAnswer results under long-context settings. For a question with $N$ candidate answers, we define the random guess accuracy as $1/N$. The random guess accuracy for a long-context setting is the average of the random guess accuracies over all questions within the test set.
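Restated in code, the baseline in the caption is just a per-question average; the helper name below is ours.

```python
def random_guess_accuracy(candidates_per_question: list[int]) -> float:
    """Average 1/N over all questions, where N counts candidate answers."""
    return sum(1.0 / n for n in candidates_per_question) / len(candidates_per_question)

# Example: if three questions have 8, 10, and 12 candidates,
# the baseline is (1/8 + 1/10 + 1/12) / 3 ~ 10.3%.
```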
<table><tr><td>CopyInst Rate</td><td>2k</td><td>4k</td><td>8k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-1106</td><td>25.0</td><td>22.0</td><td>10.5</td><td>1.0</td></tr><tr><td>GPT-3.5-Turbo-1106</td><td>30.0</td><td>25.5</td><td>64.5</td><td>73.3</td></tr><tr><td>Claude-2</td><td>99.5</td><td>95.0</td><td>97.4</td><td>96.9</td></tr><tr><td>Expectation</td><td>5.0</td><td>5.0</td><td>5.0</td><td>5.5</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>100.0</td><td>99.8</td><td>99.1</td><td>100.0</td></tr><tr><td>ChatGLM2-6B-32k</td><td>11.3</td><td>13.8</td><td>10.5</td><td>81.3</td></tr><tr><td>ChatGLM3-6B-32k</td><td>21.6</td><td>54.8</td><td>88.0</td><td>88.1</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>100.0</td><td>100.0</td><td>59.4</td><td>33.3</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>96.6</td><td>99.0</td><td>12.2</td><td>3.1</td></tr><tr><td>Expectation</td><td>5.3</td><td>5.0</td><td>5.4</td><td>5.2</td></tr></table>
Table 4: The copy instruction rate of LLMs on TSort under long-context settings. Expectation denotes the ratio of test cases for which the in-context example answer is exactly the correct one.
# 4.3 Error Breakdown
We further analyze the error instances on TSort and BestAnswer and find that most errors fall into two categories: (1) the LLM fails to follow the provided instruction and does not output a valid answer; (2) the LLM outputs a valid answer, but simply copies the example answer provided in the in-context example. Figure 2 displays the instruction following rate on TSort and BestAnswer. Tables 4 and 5 provide detailed statistics on the copy instruction rate on TSort and BestAnswer.
<table><tr><td>CopyInst Rate</td><td>1k</td><td>2k</td><td>4k</td><td>6k</td><td>8k</td><td>12k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-1106</td><td>12.5</td><td>8.5</td><td>5.0</td><td>5.5</td><td>6.0</td><td>2.0</td><td>2.0</td></tr><tr><td>GPT-3.5-Turbo-1106</td><td>16.5</td><td>22.5</td><td>18.5</td><td>16.0</td><td>11.5</td><td>2.0</td><td>0.0</td></tr><tr><td>Claude-2</td><td>21.5</td><td>25.5</td><td>40.5</td><td>41.0</td><td>42.5</td><td>49.0</td><td>55.0</td></tr><tr><td>Expectation</td><td>13.0</td><td>7.0</td><td>3.0</td><td>2.0</td><td>2.5</td><td>1.5</td><td>1.5</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>67.4</td><td>94.7</td><td>89.5</td><td>57.8</td><td>70.6</td><td>49.4</td><td>13.0</td></tr><tr><td>ChatGLM2-6B-32k</td><td>36.5</td><td>43.7</td><td>35.8</td><td>27.2</td><td>24.4</td><td>35.5</td><td>44.7</td></tr><tr><td>ChatGLM3-6B-32k</td><td>47.9</td><td>66.1</td><td>33.3</td><td>30.4</td><td>22.5</td><td>24.8</td><td>16.7</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>63.1</td><td>96.2</td><td>91.8</td><td>57.9</td><td>66.6</td><td>27.8</td><td>17.9</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>27.8</td><td>45.8</td><td>55.3</td><td>19.8</td><td>3.4</td><td>5.6</td><td>11.1</td></tr><tr><td>Expectation</td><td>14.4</td><td>10.0</td><td>5.1</td><td>2.3</td><td>1.7</td><td>1.3</td><td>1.2</td></tr></table>
Table 5: The copy instruction rate of LLMs on BestAnswer under long-context settings. Expectation denotes the ratio of test cases for which the in-context example answer is exactly the correct one.
The state-of-the-art GPT-4-Turbo maintains a relatively low copy instruction rate and an impeccable instruction following rate on both tasks. Error instances of Claude-2, LongChat, and Vicuna models are predominantly due to elevated copy instruction rates, while ChatGLM models suffer from low instruction following rates. It is worth noting that all models, with the sole exception of GPT-4-Turbo, find it more difficult to follow the instruction on both tasks as text length increases.
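Both error statistics can be computed mechanically from raw model outputs; the sketch below (the helper name and the exact regular expression are our own assumptions, since the paper does not publish its parsing code) classifies each TSort response as invalid, a copy of the in-context example, or a genuine attempt.

```python
import re

EXAMPLE_ANSWER = [4, 1, 3, 2]  # the answer shown in the in-context example

def classify_response(text: str) -> str:
    """Classify a TSort response for the error breakdown."""
    match = re.search(r"Answer:\s*\[([\d,\s]+)\]", text)
    if match is None:
        return "invalid"            # counts against instruction following rate
    order = [int(tok) for tok in match.group(1).split(",")]
    if sorted(order) != [1, 2, 3, 4]:
        return "invalid"            # not a permutation of the four parts
    if order == EXAMPLE_ANSWER:
        return "copied_example"     # counts toward copy instruction rate
    return "valid"
```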
# 4.4 Ultra-Long-Context Evaluation Results
We evaluate the following proprietary models under ultra-long-context settings: (1) GPT-4-Turbo-0125, (2) GPT-4-Turbo-1106, (3) Claude-2, (4) Claude-2.1. We also evaluate InternLM2-7b on the BestAnswer benchmark under ultra-long-context settings. Due to the high API cost, we test 50 samples under each ultra-long-context setting. Table 6 presents the results.
Though the evaluated models claim that they can understand long text of up to $100,000+$ tokens (e.g., a whole book with hundreds of pages), they suffer a dramatic performance decline under ultra-long-context settings compared to their long-context performance. For the TSort task, GPT-4-Turbo only achieves random-guess-level accuracy, while Claude fails to give any correct answer. For BestAnswer, the performance of all three models falls sharply from 16k to 32k text length; moreover, they cannot give any correct answer when the text length exceeds 32k.

Figure 2: The instruction following rate of LLMs on TSort (Left) and BestAnswer (Right) under long-context settings. GPT-4-Turbo on TSort and all proprietary models on BestAnswer achieve a $100\%$ instruction following rate across all long-context settings, and are thus not displayed.
<table><tr><td>Benchmark</td><td>Model</td><td>32k</td><td>64k</td><td>128k</td></tr><tr><td rowspan="5">TSort</td><td>GPT-4-Turbo-0125</td><td>2.0</td><td>4.0</td><td>2.0</td></tr><tr><td>GPT-4-Turbo-1106</td><td>6.0</td><td>6.0</td><td>6.0</td></tr><tr><td>Claude-2</td><td>0.0</td><td>0.0</td><td>/</td></tr><tr><td>Claude-2.1</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td>Random Guess</td><td>4.2</td><td>4.2</td><td>4.2</td></tr><tr><td rowspan="6">BestAnswer</td><td>GPT-4-Turbo-0125</td><td>30.0</td><td>0.0</td><td>0.0</td></tr><tr><td>GPT-4-Turbo-1106</td><td>16.0</td><td>0.0</td><td>0.0</td></tr><tr><td>Claude-2</td><td>4.0</td><td>0.0</td><td>/</td></tr><tr><td>Claude-2.1</td><td>4.0</td><td>0.0</td><td>0.0</td></tr><tr><td>InternLM2-7b</td><td>0.5</td><td>0.5</td><td>0.0</td></tr><tr><td>Random Guess</td><td>0.6</td><td>0.3</td><td>0.1</td></tr></table>
Table 6: Results of LLMs on TSort and BestAnswer benchmarks in ultra-long-context settings.
# 4.5 Ablation Study
# 4.5.1 Perplexity Evaluation on TSort
Perplexity (PPL) evaluation is frequently adopted to assess the capability of LLMs. During inference, the model computes the perplexity of multiple candidates, and the one with the lowest perplexity is selected as the inference result. For TSort, we create 24 candidates for perplexity computation; each candidate is a permutation of the 4 text segments.
We conduct PPL-based evaluation for open-source LLMs under the 2k, 4k, and 8k length settings. Table 7 exhibits the PPL-Eval results on TSort. When text segments are arranged in the correct order, a significantly lower perplexity score can usually be observed<sup>5</sup>, resulting in high TSort accuracy. However, when the sorting task is presented as QA, where LLMs are asked to directly output the correct order, the performance significantly deteriorates, indicating the limited instruction following capabilities of existing LLMs.
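A minimal sketch of this lowest-perplexity selection follows, assuming a Hugging Face causal LM; the helper name and the way segments are joined are our simplifications, not the paper's exact pipeline.

```python
import itertools
import torch

def ppl_rank(model, tokenizer, before: str, segments: list[str], after: str) -> list[int]:
    """Pick the permutation of segments whose text has the lowest perplexity."""
    best_perm, best_nll = None, float("inf")
    for perm in itertools.permutations(range(len(segments))):  # 4! = 24 candidates
        text = "\n".join([before, *(segments[i] for i in perm), after])
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels=ids, the loss is the mean token-level negative
            # log-likelihood; exp(loss) is the perplexity, and the argmin
            # over candidates is identical either way.
            nll = model(input_ids=ids, labels=ids).loss.item()
        if nll < best_nll:
            best_perm, best_nll = perm, nll
    return [i + 1 for i in best_perm]  # 1-indexed order, as in the benchmark
```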
<table><tr><td>TSort (PPL Eval)</td><td>2k</td><td>4k</td><td>8k</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>60.9</td><td>68.3</td><td>77.4</td></tr><tr><td>ChatGLM2-6B-32k</td><td>40.5</td><td>53.5</td><td>57.5</td></tr><tr><td>ChatGLM3-6B-32k</td><td>50.1</td><td>57.0</td><td>59.3</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>70.1</td><td>78.3</td><td>77.7</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>79.3</td><td>86.7</td><td>89.2</td></tr><tr><td>Random Guess</td><td>4.2</td><td>4.2</td><td>4.2</td></tr></table>
Table 7: Perplexity Evaluation Results on TSort for open-source LLMs.
# 4.5.2 Position Bias in BestAnswer
To study the position bias of existing LLMs on BestAnswer, we keep the questions and answer candidates the same and alter the position of the groundtruth answer. Specifically, we manually set the groundtruth answer at the beginning, in the middle, or at the rear of all answers and then perform the evaluation. Table 8 displays the evaluation results. All models demonstrate significant position bias in choosing the most helpful answer. Most models achieve much better accuracy when the most helpful answer is presented at the beginning. Claude-2 shows some unique behavior: it performs best when the groundtruth is positioned at the rear in 4 of the 5 length settings. As the input length increases, the position bias becomes more obvious. For instance, Vicuna-7b-v1.5-16k demonstrates relatively uniform accuracy under the 1k setting; however, when the input length extends to 16k tokens, the model's performance remains stable only when the best answer is at the front.
<table><tr><td>BestAnswer</td><td>Pos</td><td>1k</td><td>2k</td><td>4k</td><td>8k</td><td>16k</td></tr><tr><td rowspan="3">GPT-4-Turbo-1106</td><td>front</td><td>76.5</td><td>82.5</td><td>86.5</td><td>90.0</td><td>82.0</td></tr><tr><td>mid</td><td>74.5</td><td>68.0</td><td>60.0</td><td>38.0</td><td>38.5</td></tr><tr><td>rear</td><td>57.5</td><td>46.6</td><td>44.0</td><td>40.5</td><td>26.5</td></tr><tr><td rowspan="3">GPT-3.5-Turbo-1106</td><td>front</td><td>77.0</td><td>80.5</td><td>77.0</td><td>46.5</td><td>2.5</td></tr><tr><td>mid</td><td>64.5</td><td>48.5</td><td>32.0</td><td>9.5</td><td>0.5</td></tr><tr><td>rear</td><td>37.5</td><td>19.0</td><td>8.5</td><td>6.0</td><td>3.5</td></tr><tr><td rowspan="3">Claude-2</td><td>front</td><td>34.0</td><td>19.0</td><td>14.5</td><td>50.0</td><td>6.0</td></tr><tr><td>mid</td><td>49.0</td><td>35.5</td><td>21.5</td><td>13.0</td><td>5.0</td></tr><tr><td>rear</td><td>59.0</td><td>36.5</td><td>26.0</td><td>11.0</td><td>9.5</td></tr><tr><td rowspan="3">LongChat-7b-v1.5-32k</td><td>front</td><td>24.1</td><td>5.0</td><td>12.1</td><td>33.6</td><td>29.0</td></tr><tr><td>mid</td><td>32.7</td><td>13.6</td><td>0.2</td><td>0.2</td><td>0.0</td></tr><tr><td>rear</td><td>29.8</td><td>1.9</td><td>0.0</td><td>0.1</td><td>0.1</td></tr><tr><td rowspan="3">ChatGLM2-6B-32k</td><td>front</td><td>30.0</td><td>31.5</td><td>46.2</td><td>10.5</td><td>0.5</td></tr><tr><td>mid</td><td>27.7</td><td>10.4</td><td>1.0</td><td>0.1</td><td>0.1</td></tr><tr><td>rear</td><td>28.5</td><td>12.4</td><td>2.6</td><td>4.1</td><td>0.0</td></tr><tr><td rowspan="3">ChatGLM3-6B-32k</td><td>front</td><td>48.9</td><td>34.3</td><td>37.6</td><td>35.8</td><td>19.0</td></tr><tr><td>mid</td><td>41.9</td><td>22.3</td><td>5.3</td><td>0.9</td><td>0.1</td></tr><tr><td>rear</td><td>28.8</td><td>5.4</td><td>3.7</td><td>8.8</td><td>2.9</td></tr><tr><td rowspan="3">Vicuna-7b-v1.5-16k</td><td>front</td><td>29.3</td><td>8.9</td><td>14.0</td><td>37.6</td><td>25.4</td></tr><tr><td>mid</td><td>32.8</td><td>13.6</td><td>0.0</td><td>0.0</td><td>0.2</td></tr><tr><td>rear</td><td>34.2</td><td>2.1</td><td>0.0</td><td>0.0</td><td>0.7</td></tr><tr><td rowspan="3">Vicuna-13b-v1.5-16k</td><td>front</td><td>52.5</td><td>51.4</td><td>58.6</td><td>81.7</td><td>11.8</td></tr><tr><td>mid</td><td>64.5</td><td>29.2</td><td>1.5</td><td>0.5</td><td>0.3</td></tr><tr><td>rear</td><td>34.2</td><td>2.4</td><td>0.0</td><td>0.0</td><td>13.4</td></tr></table>
Table 8: Results of LLMs on BestAnswer where the best answer is set at the front, in the middle, and at the rear of all answers. Pos denotes the position of the best answer.
# 4.5.3 Scalable Position Embeddings
Scalable position embeddings have shown their value in extending the context window while requiring minimal or no fine-tuning. Existing position embedding methods for context window extension fall into two major categories: position interpolation and length extrapolation. NTK-aware Scaled RoPE combines the advantages of both methods by changing the base of RoPE. ReRoPE and Leaky ReRoPE (Su, 2023) design a window size to directly control the application of scalable position embeddings. We conduct our study on Vicuna-v1.5 models (Zheng et al., 2023), which are Llama 2 models fine-tuned with a 4k context window. We adopt the original models (4k context window) as the baseline across all settings. Table 9 shows the results of different position embedding methods on the BestAnswer benchmark. Our findings indicate that scalable position embeddings do improve long-context modeling capability. All methods enhance the accuracy under the 8k setting, which is beyond the original context window, while model performance under short settings (e.g., 1k) is basically retained. NTK-aware Scaled RoPE diminishes performance at 1k context length but outperforms the other two methods on longer contexts. The advantage of these methods is more obvious on Vicuna-13b-v1.5. Moreover, compared to the 16k versions, which utilize FlashAttention and are further trained on high-quality 16k-length conversation data, advanced scalable position embeddings still achieve comparable performance.
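For concreteness, the "changing the base of RoPE" trick is often written as the rescaling below; the exponent follows the commonly circulated NTK-aware formula, and the function is our illustrative sketch rather than code from the paper.

```python
import torch

def ntk_scaled_inv_freq(dim: int, scale: float, base: float = 10000.0) -> torch.Tensor:
    """Inverse frequencies for NTK-aware Scaled RoPE.

    `scale` is the context extension factor (e.g., 4.0 to stretch a 4k
    window toward 16k). Rescaling the base interpolates the low-frequency
    components while leaving the high-frequency components nearly intact.
    """
    ntk_base = base * scale ** (dim / (dim - 2))
    return 1.0 / (ntk_base ** (torch.arange(0, dim, 2).float() / dim))
```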
<table><tr><td>Vicuna-7b-v1.5</td><td>1k</td><td>2k</td><td>4k</td><td>8k</td></tr><tr><td>ReRoPE</td><td>39.6/39.6</td><td>11.6/11.6</td><td>4.7/5.4</td><td>2.3/3.2</td></tr><tr><td>Leaky ReRoPE</td><td>39.9/39.9</td><td>11.2/11.2</td><td>5.1/5.7</td><td>1.3/2.0</td></tr><tr><td>NTK</td><td>32.5/32.5</td><td>10.7/10.7</td><td>5.8/5.8</td><td>3.9/3.9</td></tr><tr><td>Original(4k)</td><td>39.5/39.5</td><td>9.8/11.0</td><td>4.2/5.5</td><td>0.0/0.0</td></tr><tr><td>Original(16k)</td><td>37.0/39.5</td><td>11.1/11.1</td><td>5.8/5.8</td><td>2.5/2.7</td></tr><tr><td>Vicuna-13b-v1.5</td><td>1k</td><td>2k</td><td>4k</td><td>8k</td></tr><tr><td>ReRoPE</td><td>49.2/49.2</td><td>22.5/22.5</td><td>9.2/10.0</td><td>1.5/2.8</td></tr><tr><td>Leaky ReRoPE</td><td>49.3/49.3</td><td>23.8/23.8</td><td>8.7/9.8</td><td>1.3/2.6</td></tr><tr><td>NTK</td><td>43.8/43.8</td><td>23.0/23.0</td><td>11.1/11.1</td><td>2.3/2.3</td></tr><tr><td>Original(4k)</td><td>49.1/49.1</td><td>17.7/17.7</td><td>5.9/5.9</td><td>0.1/1.0</td></tr><tr><td>Original(16k)</td><td>53.4/53.4</td><td>29.2/29.2</td><td>13.1/13.5</td><td>2.6/2.7</td></tr></table>
Table 9: Results of Vicuna-v1.5 with different context window extrapolation methods on BestAnswer. 'Original (4k) / (16k)' denotes the original Vicuna model trained with a 4k / 16k context length. In the reported 'X/Y', X indicates the accuracy, while Y indicates the accuracy when cases that failed to follow the instruction are excluded.
# 4.5.4 Comparison with Other Long-Context Benchmarks
We compare Ada-LEval with other long-context benchmarks to validate that our benchmarks require more comprehensive overall text understanding than traditional long-context benchmarks.
We regard a task as requiring comprehensive text understanding if model performance decreases sharply when the text is truncated. The TSort task meets this requirement, since truncating any segment will lead to an incorrect answer.
To show that BestAnswer requires more comprehensive text understanding than traditional QA and summarization tasks, we conduct an experiment on BestAnswer (16k version) and two classic long-context datasets: NarrativeQA (LongBench subset, QA task) and GovReport (LongBench subset, summarization task). The metric for NarrativeQA is F1 score and the metric for GovReport is ROUGE-L. We evaluate the performance of GPT-4-Turbo-1106 on all three datasets. Each test case is truncated to 2k, 4k, and 8k tokens as input. We also provide the full version for comparison.
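The truncation itself is trivial to reproduce with the same tokenizer; the sketch below assumes leading-token truncation, which the paper does not specify.

```python
import tiktoken

ENC = tiktoken.get_encoding("cl100k_base")

def truncate_case(text: str, budget: int) -> str:
    """Keep only the first `budget` tokens of a test case."""
    return ENC.decode(ENC.encode(text)[:budget])

# e.g., truncate_case(case_text, 2_000) for the 2k column of Table 10.
```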
<table><tr><td>Benchmark</td><td>2k</td><td>4k</td><td>8k</td><td>Full</td><td>Avg #tokens</td></tr><tr><td>BestAnswer</td><td>11.0</td><td>20.0</td><td>31.5</td><td>44.0</td><td>15646</td></tr><tr><td>NarrativeQA</td><td>24.7</td><td>25.6</td><td>29.7</td><td>33.1</td><td>10276</td></tr><tr><td>GovReport</td><td>30.7</td><td>32.4</td><td>33.6</td><td>30.9</td><td>29872</td></tr></table>
Table 10: Results of GPT-4-Turbo on different long-context benchmarks.
As shown in Table 10, the performance of GPT-4-Turbo on BestAnswer decreases more dramatically than on NarrativeQA and GovReport when the text is truncated. Notably, the performance on GovReport even increases when the text is truncated to $4\mathrm{k}$ and $8\mathrm{k}$ tokens. Therefore, our benchmarks require more full-text comprehension than traditional QA and summarization tasks.
# 5 Conclusion
In this paper, we introduce Ada-LEval, a length-adaptable dataset to assess the long-context capability of LLMs. We conduct comprehensive experiments on multiple LLMs and find that all open-source models still lag significantly behind state-of-the-art proprietary models in terms of long-context capability. When the input length scales to 4,000 tokens, most open-source models rapidly deteriorate to the random guess level. Meanwhile, the capability of proprietary models is also severely limited: under the ultra-long setting (32,000+ tokens), no proprietary model notably outperforms the random baseline. Ada-LEval is the first benchmark that evaluates LLMs under the ultra-long setting, and we hope that the limitations pointed out by this benchmark can serve as valuable references for the future development of long-context LLMs.
Acknowledgement. This project is supported by the National Key R&D Program of China No.2022ZD0161600 and the Shanghai Postdoctoral Excellence Program (No.2023023).
# 6 Limitations
Ada-LEval is a challenging benchmark, requiring strong understanding and reasoning capabilities over long text. Due to the poor instruction following rates and high copy instruction rates of open-source LLMs, Ada-LEval can hardly distinguish their long-context capabilities through the accuracy metric.
Furthermore, as text length increases, the difficulty of Ada-LEval rises sharply under ultra-long-context settings. Even state-of-the-art proprietary models are unable to achieve ideal performance, which further constrains its applicability to current LLMs.
# References
Chenxin An, Shansan Gong, Ming Zhong, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. 2023. L-eval: Instituting standardized evaluation for long context language models. arXiv preprint arXiv:2307.11088.
Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, et al. 2023. Longbench: A bilingual, multitask benchmark for long context understanding. arXiv preprint arXiv:2308.14508.
Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Zehui Chen, Zhi Chen, Pei Chu, Xiaoyi Dong, Haodong Duan, Qi Fan, Zhaoye Fei, Yang Gao, Jiaye Ge, Chenya Gu, Yuzhe Gu, Tao Gui, Aijia Guo, Qipeng Guo, Conghui He, Yingfan Hu, Ting Huang, Tao Jiang, Penglong Jiao, Zhenjiang Jin, Zhikai Lei, Jiaxing Li, Jingwen Li, Linyang Li, Shuaibin Li, Wei Li, Yining Li, Hongwei Liu, Jiangning Liu, Jiawei Hong, Kaiwen Liu, Kuikun Liu, Xiaoran Liu, Chengqi Lv, Hajun Lv, Kai Lv, Li Ma, Runyuan Ma, Zerun Ma, Wenchang Ning, Linke Ouyang, Jiantao Qiu, Yuan Qu, Fukai Shang, Yunfan Shao, Demin Song, Zifan Song, Zhihao Sui, Peng Sun, Yu Sun, Huanze Tang, Bin Wang, Guoteng Wang, Jiaqi Wang, Jiayu Wang, Rui Wang, Yudong Wang, Ziyi Wang, Xingjian Wei, Qizhen Weng, Fan Wu, Yingtong Xiong, Chao Xu, Ruiliang Xu, Hang Yan, Yirong Yan, Xiaogui Yang, Haochen Ye, Huaiyuan Ying, Jia Yu, Jing Yu, Yuhang Zang, Chuyu Zhang, Li Zhang, Pan Zhang, Peng Zhang, Ruijie Zhang, Shuo Zhang, Songyang Zhang, Wenjian Zhang, Wenwei Zhang, Xingcheng Zhang, Xinyue Zhang, Hui Zhao, Qian Zhao, Xiaomeng Zhao, Fengzhe Zhou, Zaida Zhou, Jingming Zhuo, Yicheng Zou, Xipeng Qiu, Yu Qiao and Dahua Lin. 2024. Internlm2 technical report.
Howard Chen, Ramakanth Pasunuru, Jason Weston, and Asli Celikyilmaz. 2023a. Walking down the memory maze: Beyond context limit through interactive reading. arXiv preprint arXiv:2310.05029.
Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023b. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
OpenCompass Contributors. 2023. Opencompass: A universal evaluation platform for foundation models. https://github.com/open-compass/opencompass.
Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022a. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344-16359.
Tri Dao, Daniel Y Fu, Khaled K Saab, Armin W Thomas, Atri Rudra, and Christopher Ré. 2022b. Hungry hungry hippos: Towards language modeling with state space models. arXiv preprint arXiv:2212.14052.
Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A Smith, and Matt Gardner. 2021. A dataset of information-seeking questions and answers anchored in research papers. arXiv preprint arXiv:2105.03011.
Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. Llm. int8(): 8-bit matrix multiplication for transformers at scale. arXiv preprint arXiv:2208.07339.
Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, and Furu Wei. 2023. Longnet: Scaling transformers to 1,000,000,000 tokens. arXiv preprint arXiv:2307.02486.
Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. 2022. Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323.
Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. 2021. Longt5: Efficient text-to-text transformer for long sequences. arXiv preprint arXiv:2112.07916.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.
Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. arXiv preprint arXiv:2104.02112.
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. 2023. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.
Tomáš Kočisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328.
Wojciech Krysciński, Nazneen Rajani, Divyansh Agarwal, Caiming Xiong, and Dragomir Radev. 2021. Booksum: A collection of datasets for long-form narrative summarization. arXiv preprint arXiv:2105.08209.
Dacheng Li, Rulin Shao, Anze Xie, Ying Sheng, Lianmin Zheng, Joseph E. Gonzalez, Ion Stoica, Xuezhe Ma, and Hao Zhang. 2023. How long can open-source llms truly promise on context length?
Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023a. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172.
Tianyang Liu, Canwen Xu, and Julian McAuley. 2023b. Repobench: Benchmarking repository-level code auto-completion systems. arXiv preprint arXiv:2306.03091.
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332.
OpenAI. 2023. Gpt-4 technical report.
Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409.
Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, et al. 2022. Scrolls: Standardized comparison over long language sequences. arXiv preprint arXiv:2201.03533.
Jianlin Su. 2023. Rectified rotary position embeddings. https://github.com/bojone/erope.
Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. 2021. Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864.
Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, and Mohit Iyyer. 2023. Pearl: Prompting large language models to plan and execute actions over long documents. arXiv preprint arXiv:2305.14564.
Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. 2022. A length-extrapolatable transformer. arXiv preprint arXiv:2212.10554.
Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. Advances in neural information processing systems, 33:17283-17297.
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685.
# A Test Case Building Statistics
Recall that for each case length on the TSort task, we set a length upper limit for each text segment and for the neighboring paragraphs before and after the contiguous chapters. We also set a stride between the beginning paragraphs of consecutive test cases. Table 11 presents detailed statistics on the length upper limits and the stride.
On the BestAnswer task, two questions are regarded as similar when they have $40\%$ of their tags in common. Under ultra-long-context settings, the two questions need only share at least 1 tag in common.
# B Evaluation Setups
Evaluation Hyperparameters. For open-source LLMs, we adopt their default hyperparameters during evaluation on Ada-LEval. For proprietary models, including GPT-4-Turbo and GPT-3.5-Turbo-1106, we set the temperature to 0.
Computational Budget. Our experiments for open-source LLMs are conducted on NVIDIA A100 80GB GPUs. The entire evaluation consumes around 800 GPU-hours.
Benchmark Instructions. We present the instructions of both tasks within Ada-LEval. To ensure that models know what to do, we include the sample input and output format that models need to follow when solving the problems. The instructions are shown below.
Validity of the 200-testcase subset. Our experiments under long-context settings adopt a 200-testcase subset for proprietary models and a 1000-testcase subset for open-source LLMs. To verify that evaluation results on the 200-testcase subset are valid, Table 12 and Table 13 display results on the 200-testcase subset for comparison.
<table><tr><td>Setting</td><td>Before</td><td>Segments</td><td>After</td><td>Stride</td></tr><tr><td>2k</td><td>200</td><td>350</td><td>200</td><td>64</td></tr><tr><td>4k</td><td>300</td><td>800</td><td>300</td><td>64</td></tr><tr><td>8k</td><td>400</td><td>1750</td><td>400</td><td>64</td></tr><tr><td>16k</td><td>500</td><td>3700</td><td>500</td><td>64</td></tr><tr><td>32k</td><td>500</td><td>7700</td><td>500</td><td>128</td></tr><tr><td>64k</td><td>500</td><td>15700</td><td>500</td><td>128</td></tr><tr><td>128k</td><td>500</td><td>31700</td><td>500</td><td>128</td></tr></table>
Table 11: The length upper limit of text segments and stride between beginning paragraphs on TSort.
<table><tr><td>TSort (200-testcase)</td><td>2k</td><td>4k</td><td>8k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-0125</td><td>15.5</td><td>16.5</td><td>8.5</td><td>5.5</td></tr><tr><td>GPT-4-Turbo-1106</td><td>18.5</td><td>15.5</td><td>7.5</td><td>3.5</td></tr><tr><td>GPT-3.5-Turbo-1106</td><td>4.0</td><td>4.5</td><td>4.5</td><td>5.5</td></tr><tr><td>Claude-2</td><td>5.0</td><td>5.0</td><td>4.5</td><td>3.0</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>5.0</td><td>5.0</td><td>2.5</td><td>2.0</td></tr><tr><td>ChatGLM2-6B-32k</td><td>1.0</td><td>0.5</td><td>0.5</td><td>1.0</td></tr><tr><td>ChatGLM3-6B-32k</td><td>3.5</td><td>3.0</td><td>1.0</td><td>0.5</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>5.0</td><td>1.5</td><td>1.0</td><td>2.5</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>5.0</td><td>5.0</td><td>3.0</td><td>4.0</td></tr><tr><td>Random Guess</td><td>4.2</td><td>4.2</td><td>4.2</td><td>4.2</td></tr></table>
Table 12: TSort results under long-context settings (200-testcase subset).
# TSort:
You are an AI assistant. Your job is to sort multiple book sections into the correct order.

Each time, you will be provided with 4 pieces of text.

These texts form a continuous part of a book, but are provided in random order.

You need to find the correct order and return the answer in a string.

For example, if you output [4, 1, 3, 2], that means the correct order is: Part 4 -> Part 1 -> Part 3 -> Part 2.

You will also be provided with the neighboring paragraphs before and after the 4 pieces of texts.

The case sample is shown below and you should give me the answer in the format exactly the same as the sample.

However, you should NOT focus on the content of sample answer.

Please do NOT output any extra content.

Sample Input (format only):

Before: XXX (Text before the continuous book part)

Part 1: XXX

Part 2: XXX

Part 3: XXX

Part 4: XXX

After: XXX (Text after the continuous book part)

Sample Output (format only):

Answer: [4, 1, 3, 2]
# BestAnswer:
You are an AI assistant. Your job is to find out the most helpful answer to a given question.

Each time, you will be provided with a question and n answers to this question.

Each answer begins with an 'A' and a number (e.g. A4), which represents its designation.

You need to determine which answer is the most helpful one to the question.

The case sample is shown below and you should give me the answer in the format exactly the same as the sample.

However, you should NOT focus on the content of sample answer.

Sample Input (format only):

The question is given below.

XXX(The content of question)

Possible answers are given below.

A1:

XXX(The content of answer 1)

A2:

XXX(The content of answer 2)

.

.

An:

XXX(The content of answer n)

Now the answers are over, please decide which answer is the most helpful one to the question. You must give me only the designation of the MOST helpful answer.

Sample Output (format only):

Answer: The designation of the most helpful answer.(e.g. A4 means answer 4 is the most helpful answer)
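Responses in this format can be extracted with a short regular expression; the sketch below is our own illustration of how such outputs might be scored, not the benchmark's released code.

```python
import re

def parse_best_answer(response: str) -> int | None:
    """Extract the chosen answer's designation (e.g., 'A4' -> 4)."""
    match = re.search(r"Answer:\s*A(\d+)", response)
    return int(match.group(1)) if match else None  # None: instruction not followed

# parse_best_answer("Answer: A4") == 4
```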
<table><tr><td>BestAnswer (200-testcase)</td><td>1k</td><td>2k</td><td>4k</td><td>6k</td><td>8k</td><td>12k</td><td>16k</td></tr><tr><td>GPT-4-Turbo-0125</td><td>73.5</td><td>73.5</td><td>65.5</td><td>63.0</td><td>56.5</td><td>52.0</td><td>44.5</td></tr><tr><td>GPT-4-Turbo-1106</td><td>74.0</td><td>73.5</td><td>67.5</td><td>59.5</td><td>53.5</td><td>49.5</td><td>44.0</td></tr><tr><td>GPT-3.5-turbo-1106</td><td>61.5</td><td>48.5</td><td>41.5</td><td>29.5</td><td>17.0</td><td>2.5</td><td>2.5</td></tr><tr><td>Claude-2</td><td>65.0</td><td>43.5</td><td>23.5</td><td>15.0</td><td>17.0</td><td>12.0</td><td>11.0</td></tr><tr><td>LongChat-7b-v1.5-32k</td><td>32.5</td><td>8.0</td><td>3.5</td><td>3.0</td><td>2.5</td><td>1.5</td><td>1.0</td></tr><tr><td>ChatGLM2-6B-32k</td><td>36.0</td><td>10.5</td><td>3.0</td><td>0.5</td><td>1.5</td><td>0.0</td><td>0.0</td></tr><tr><td>ChatGLM3-6B-32k</td><td>37.0</td><td>15.5</td><td>5.5</td><td>4.0</td><td>5.5</td><td>0.5</td><td>0.5</td></tr><tr><td>Vicuna-7b-v1.5-16k</td><td>32.5</td><td>8.5</td><td>2.5</td><td>3.5</td><td>3.0</td><td>0.5</td><td>2.0</td></tr><tr><td>Vicuna-13b-v1.5-16k</td><td>52.0</td><td>29.0</td><td>11.0</td><td>4.0</td><td>1.5</td><td>1.0</td><td>1.5</td></tr><tr><td>Random Guess</td><td>26.7</td><td>10.1</td><td>4.5</td><td>3.0</td><td>2.3</td><td>1.4</td><td>1.1</td></tr></table>
Table 13: BestAnswer results under long-context settings (200-testcase subset).
2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f8c58cb40f83c8f01e419a37b885587ccc937f6d0df953f3085eac5590d5374
size 689259
2024/Ada-LEval_ Evaluating long-context LLMs with length-adaptable benchmarks/layout.json
ADDED
The diff for this file is too large to render.
See raw diff

2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/ae152a93-ecfb-4a6c-b85d-f885c8997f05_content_list.json
ADDED
The diff for this file is too large to render.
See raw diff

2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/ae152a93-ecfb-4a6c-b85d-f885c8997f05_model.json
ADDED
The diff for this file is too large to render.
See raw diff

2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/ae152a93-ecfb-4a6c-b85d-f885c8997f05_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e4bb0bcda6af4091ded03308a7b2081236554eb67a7873d9c684e79aadd15ae8
size 2579240
2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/full.md
ADDED
@@ -0,0 +1,422 @@
# Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations
Emilio Villa-Cueva $^{1,2}$ , A. Pastor López-Monroy $^{1}$ , Fernando Sánchez-Vega $^{1,4}$ , Thamar Solorio $^{2,3}$
<sup>1</sup>Mathematics Research Center (CIMAT), Gto, Mexico,

<sup>2</sup>Mohamed bin Zayed University of Artificial Intelligence, UAE,

<sup>3</sup>University of Houston, USA,

<sup>4</sup>Consejo Nacional de Humanidades, Ciencias y Tecnologias, Mexico
# Abstract
Zero-Shot Cross-lingual Transfer (ZS-XLT) utilizes a model trained in a source language to make predictions in another language, often with a performance loss. To alleviate this, additional improvements can be achieved through subsequent adaptation using examples in the target language. In this paper, we exploit In-Context Tuning (ICT) for One-Shot Cross-lingual transfer in the classification task by introducing In-Context Cross-lingual Transfer (IC-XLT). The novel concept involves training a model to learn from context examples and subsequently adapting it at inference time to a target language by prepending a One-Shot context demonstration in that language. Our results show that IC-XLT successfully leverages target-language examples to improve the cross-lingual capabilities of the evaluated mT5 model, outperforming prompt-based models in the Zero- and Few-Shot scenarios adapted through fine-tuning. Moreover, we show that when source-language data is limited, the fine-tuning framework employed for IC-XLT performs comparably to prompt-based fine-tuning with significantly more training data in the source language.
# 1 Introduction
The recent progress in the development of multilingual Language Models (LMs) has allowed for effective cross-lingual transfer (XLT) with minimal need for architectural modifications (Pires et al., 2019; Xue et al., 2020). By simply training a multilingual model in a language with abundant resources, its acquired knowledge can be extended to target languages in either Zero-Shot or Few-Shot scenarios. Cross-lingual transfer is a significant topic as it addresses the prevalent challenge of data scarcity in languages other than widely resourced ones, such as English (Joshi et al., 2020). The ability to leverage the extensive linguistic resources available in high-resource languages for languages with limited training data enables the deployment of truly inclusive NLP systems.
Zero-Shot Cross-lingual Transfer (ZS-XLT) involves transferring a model trained in a source language to a target language without any demonstration of target-language examples (Chen et al., 2021; Pires et al., 2019). This approach is highly modular, as it requires no adaptations specific to the target language. On the other hand, Few-Shot Cross-lingual Transfer (FS-XLT) enhances target-language accuracy by further fine-tuning a model using labeled target data (Lauscher et al., 2020; Zhao et al., 2021; Schmidt et al., 2022). However, this method faces limitations when the available target-language data is scarce, especially for higher-level tasks, leading to negligible enhancements or even detrimental effects on performance (Lauscher et al., 2020; Zhao et al., 2021). Furthermore, this approach incurs higher computational costs due to the fine-tuning step and diminishes the modularity that characterizes the Zero-Shot method.
Our perspective is that adapting to a target language should prioritize resource efficiency and modularity, so that we can seamlessly deploy a single model trained in English (or another source language) across different languages without any fine-tuning. In this work, we aim to improve this aspect for text classification – a high-level task – by leveraging the language-specific abilities of a multilingual model, prepending a One-Shot text-label target-language demonstration to the input text to predict the correct label. Specifically, we propose In-Context Cross-lingual Transfer (IC-XLT), a simple yet effective method for One-Shot Cross-lingual Transfer in text classification.
This novel approach employs In-Context Tuning (ICT) (Chen et al., 2022) to train an encoder-decoder model in the source language, tasking it to predict labels for input texts using information derived from context demonstrations. ICT is a meta-learning strategy that optimizes a model's ability to learn from in-context examples, originally designed to facilitate swift adaptation to new tasks by prepending target-task in-context demonstrations to the input during the adaptation process. To the best of our knowledge, this is the first study of ICT in the context of cross-lingual transfer.
The proposed method is composed of a fine-tuning and an adaptation stage. First, we fine-tune on the source language through ICT, where the model is trained for the classification task and also to learn from context demonstrations. Then, we adapt to the target language at inference time by prepending a One-Shot<sup>1</sup> demonstration in that language to the input. This method is modular and cost-effective at the adaptation stage as it does not require any gradient update.
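At the adaptation stage, this amounts to plain string concatenation at inference time. The template below is a hypothetical sketch for an mT5-style encoder-decoder; the paper's exact prompt format is not reproduced here, and the field names are our own.

```python
def build_icxlt_input(demo_text: str, demo_label: str, query_text: str) -> str:
    """Prepend a One-Shot target-language demonstration to the query.

    A model In-Context Tuned in the source language is expected to read
    the (text, label) pair and generate the label for the query, with no
    gradient updates in the target language.
    """
    return f"text: {demo_text} label: {demo_label} text: {query_text} label:"

# Example (Spanish target language, hypothetical labels):
# build_icxlt_input("La película fue excelente", "positivo",
#                   "El servicio fue terrible")
```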
We evaluate IC-XLT on two multilingual text classification datasets, spanning 5 and 13 target languages, with English as the source language. We consider two distinct settings. First, we assume access to the entire source-language training dataset. For the second setting, we deliberately constrain the amount of source training data available. This limitation aims to gauge the robustness of the proposed approach in scenarios where the availability of source data is restricted. We hypothesize that leveraging context information may prove particularly beneficial in tasks where source data is limited.
The contributions of this work are the following:
1. IC-XLT as an effective strategy for One-Shot Cross-lingual transfer: By measuring the reduction in the transfer gap of IC-XLT against standard approaches, and the performance improvement after introducing target-language examples, we present empirical evidence that training a model in a source language with In-Context Tuning allows it to leverage a One-Shot demonstration through In-Context Learning to adapt to a target language. This results in a One-Shot XLT approach that adapts to a target language at inference without requiring gradient updates.
2. ICT improves mT5 fine-tuning when resources are limited. We observe that for the evaluated tasks, ICT training yields better performance compared to traditional fine-tuning when the (source-language) training data consists of only a few shots per label. In particular, IC-XLT models trained in this scenario (1) benefit from this behavior at the adaptation stage and (2) leverage target-language in-context examples, achieving performance comparable to Prompt Tuning transfer methods with significantly less source-language data.
# 2 Related work
# 2.1 Zero and Few-Shot Cross-lingual Transfer
Multilingual transformers, such as mBERT (Devlin et al., 2018), XLMR (Conneau et al., 2019), and mT5 (Xue et al., 2020), have showcased notable ability in Zero-Shot Cross-lingual Transfer (ZS-XLT) (Pires et al., 2019). In this paradigm, these models are trained using abundant data in a source language and subsequently undergo evaluation in a target language without exposure to any training data in that specific language. However, this methodology is susceptible to significant performance variance (Keung et al., 2020), and the transfer performance gap is contingent upon the linguistic proximity between the source and target languages (Pires et al., 2019).
Furthermore, recent studies indicate that incorporating a small number of annotated examples in the target language can mitigate the performance gap between the source and target languages (Lauscher et al., 2020; Zhao et al., 2021; Schmidt et al., 2022). This methodology, termed Few-Shot Cross-Lingual Transfer (FS-XLT), involves first fine-tuning a model on an extensive source dataset (as in ZS-XLT), and then subjecting it to a second fine-tuning on the reduced target-language data, facilitating its adaptation to this target language. This approach yields a noticeable improvement in performance at a relatively low labeling cost across various NLP tasks (Lauscher et al., 2020).

However, empirical evidence (Lauscher et al., 2020; Zhao et al., 2021; Schmidt et al., 2022) indicates that FS-XLT yields the most significant benefits for low-level tasks, such as Named Entity Recognition (NER) or Part-of-Speech (POS) tagging. When applied to high-level tasks, like Natural Language Inference (NLI) or text classification, performance only improves with a substantial number of examples from the target language. When adapted with small datasets in the target language (<100 samples), FS-XLT tends to offer minimal performance gains or may even lead to a decline in model efficacy.

Additionally, according to Schmidt et al. (2022), sequential FS-XLT can also exhibit unreliability in the Few-Shot scenario due to considerable variance in performance at different checkpoints during training. To address this issue, they propose jointly training the model using both source and target data in the adaptation stage of the process, which improves stability in the Few-Shot setting. This fine-tuned FS-XLT approach, however, has two notable drawbacks. Firstly, it lacks modularity, as the models are trained specifically for the selected target language during the adaptation stage. Secondly, there is a substantial increase in computational cost compared to Zero-Shot Cross-lingual Transfer due to the adaptation fine-tuning, whose cost scales with the size of the base model.
Moreover, existing methods predominantly address the XLT task under the assumption of abundant data in the source language. Although this is a fair assumption in many cases, as it is generally much more likely to find labeled datasets in high-resource languages, there are scenarios where the source domain itself is limited. Instances of this include domain-specific tasks with a scarcity of annotated samples, or tasks related to rapidly emerging trends and language patterns originating from social media, where, due to their emerging nature, there is no labeled data available. In such cases, it might be more feasible to secure annotations for high-resource languages, which can then be transferred to other languages.
Given these considerations, we believe it is pertinent to investigate how XLT performance scales as the quantity of available source data is systematically reduced. The intuition behind this is that the introduction of target-language shots may alleviate the performance decrease associated with reduced source training data.

# 2.2 In-Context Learning and Language Models

LMs have demonstrated an aptitude for learning from a small number of demonstrations through a method known as In-Context Learning (ICL) (Brown et al., 2020), where the model is tasked with predicting an input prepended with labeled examples. Particularly, Winata et al. (2021) observed that it is possible to achieve satisfactory performance in a cross-lingual setting when evaluating an mT5 model with a target-language input prefixed with labeled English demonstrations.
This Zero-Shot approach, although efficient, can be sub-optimal as it does not take full advantage of resources in the source language due to the lack of fine-tuning.
Recent findings indicate that transformers (Vaswani et al., 2017) can perform model selection on functions encountered during pre-training through in-context demonstrations. Yet, they still find it challenging to generalize effectively to out-of-distribution classes, as highlighted by Yadlowsky et al. (2023). Given that most pre-trained LMs have not been explicitly trained for ICL, they might exhibit sub-optimal behavior when presented with Few-Shot demonstrations. In response to this challenge, Chen et al. (2022) introduced In-Context Tuning (ICT), a meta-learning approach designed to train a model to effectively learn from in-context demonstrations<sup>2</sup>. ICT meta-trains a language model across a range of tasks, enhancing its ability to swiftly adapt to new tasks through ICL.
Still, In-Context Tuning has not yet been implemented for language transfer, as opposed to task transfer. We hypothesize that fine-tuning a multilingual model concurrently for learning from input context and the downstream task can leverage multilingual knowledge acquired during pretraining. This, we anticipate, will result in enhanced classification performance in a target language when provided with examples in that language. Therefore, in this study we showcase the efficacy of this idea for One-Shot Cross-lingual Transfer, particularly for adapting to a target language through a One-Shot demonstration in-context. This adaptation method proves effective in improving text classification performance by better leveraging target-language examples compared to the fine-tuned FS-XLT. Moreover, we delve into the advantages of employing this approach in scenarios where source task data is not abundant.

# 3 Our proposed approach: In-Context Cross-Lingual Transfer

Our method aims to simultaneously train a pretrained multilingual encoder-decoder model for (1) a downstream text classification task, and (2) learning from context demonstrations. Then, we expect it to be able to generate predictions in a target language by including context demonstrations in that language. Therefore, we leverage ICT fine-tuning to transfer between languages at inference. As described above, our proposed procedure, called In-Context Cross-lingual Transfer (IC-XLT), comprises two stages:

In-Context Tuning During the meta-training stage, we fine-tune the base multilingual model for a specific task using data from the source language. Let the set of pairs $D^{src} = \{(x_{1}^{src}, y_{1}^{src}), \ldots, (x_{|D|}^{src}, y_{|D|}^{src})\}$ represent the source-language training dataset. The objective is to train the model to predict the label $y_{i}^{src}$ for a given text $x_{i}^{src}$ with the following input $\Rightarrow$ output format:

$$
X^{src}, x_{i}^{src} \Rightarrow y_{i}^{src}
$$

Here, $X^{src} = ((x_{j_1},y_{j_1}),\ldots ,(x_{j_M},y_{j_M}))$ is a sequence of $M$ text-label pairs randomly sampled from $D^{src}$ without replacement, excluding the pair $(x_i^{src},y_i^{src})$. In simpler terms, $M$ is the number of source-language demonstrations prepended to each input during fine-tuning, not the number of $K_{tgt}$-shot examples prepended during inference.
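
For illustration, a minimal sketch of how such a training instance can be assembled is shown below, assuming a simple `text: ... label: ...` serialization; the exact prompt template and function names are illustrative choices, not prescribed by the method.

```python
import random

def build_ict_instance(dataset, i, M, rng=random):
    """Build one In-Context Tuning instance with format (X_src, x_i => y_i).

    dataset: list of (text, label) pairs in the source language.
    i: index of the query example whose label must be predicted.
    M: number of in-context demonstrations prepended during fine-tuning.
    """
    # Sample M demonstrations without replacement, excluding the query pair.
    pool = [pair for j, pair in enumerate(dataset) if j != i]
    demos = rng.sample(pool, M)

    # Serialize demonstrations followed by the unlabeled query.
    parts = [f"text: {x} label: {y}" for x, y in demos]
    parts.append(f"text: {dataset[i][0]} label:")
    return " ".join(parts), dataset[i][1]  # (model input, target label)
```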
In-Context Learning At inference, we adapt to a target language by prepending the samples from the target language training dataset $\widetilde{D}^{tgt} = \{(\widetilde{x}_1^{tgt},\widetilde{y}_1^{tgt}),\ldots ,(\widetilde{x}_{NK_{tgt}}^{tgt},\widetilde{y}_{NK_{tgt}}^{tgt})\}$ to each entry $x_{i}^{tgt}$ of the test set to predict $y_{i}^{tgt}$ . Consequently, the input format mirrors the structure observed in the ICT stage:

$$
\widetilde{X}^{tgt}, x_{i}^{tgt} \Rightarrow y_{i}^{tgt}
$$

Where $N$ is the number of classes and the sequence $\widetilde{X}^{tgt}$ is a concatenation of $\widetilde{D}^{tgt}$ entries comprising the $K_{tgt}$ -shot samples per class, prepended to each $x_{i}^{tgt}$ entry at the inference stage.
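
Analogously, a One-Shot ($K_{tgt} = 1$) target-language context can be assembled once and reused for every test input; a sketch under the same assumed serialization as above:

```python
import random

def build_one_shot_context(target_train, classes, rng=random):
    """Select one target-language example per class (K_tgt = 1) and serialize them."""
    shots = []
    for c in classes:
        shots.append(rng.choice([p for p in target_train if p[1] == c]))
    rng.shuffle(shots)  # avoid a fixed class order in the prompt
    return " ".join(f"text: {x} label: {y}" for x, y in shots)

# At inference, the same context is prepended to every test input,
# with no gradient update:
# model_input = context + f" text: {test_text} label:"
```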
The intuitive idea for this approach is that, after the meta-training stage, we expect the model to understand both the classification task and the contextual relationships relevant to it. During the adaptation stage, the model leverages its multilingual pretraining to interpret context examples in the target language. Note that the adaptation to the target language in this context is gradient-free, as it occurs during inference, and thus no model weights are updated.

# 4 Experimental Methodology

In this section, we outline the methodology employed to evaluate the proposed approach. We assess IC-XLT's effectiveness in adapting to a target language for the classification task and compare its performance in cross-lingual transfer under (1) full training data in the source language and (2) various source-language data budgets. We conduct these limited-data experiments to assess how much IC-XLT improves over a traditional fine-tuning method by leveraging the One-Shot demonstration.

# 4.1 Data and Evaluation Metrics

We conduct evaluations on two multilingual text classification datasets. The first dataset is Aspect Category Detection (ACD) on Restaurant Reviews (Pontiki et al., 2016), a multi-label dataset comprising 12 classes representing different aspects mentioned in reviews. The second dataset is Domain Classification on assistant utterances from the MASSIVE dataset (FitzGerald et al., 2022), a single-label classification dataset with 18 possible domain classes. The main difference between these datasets is that MASSIVE assigns only one label per entry, whereas ACD allows for multiple labels per entry, presenting a more challenging task. The datasets were chosen for their larger number of labels and their availability in multiple languages with shared labels (See Appendix A.3 for further details).
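
For reference, the MASSIVE splits are publicly available; a minimal loading sketch with the Hugging Face `datasets` library, assuming the `AmazonScience/massive` hub identifier and using the `scenario` field as the domain label:

```python
from datasets import load_dataset

# English (source) and French (one of the targets) splits of MASSIVE.
massive_en = load_dataset("AmazonScience/massive", "en-US")
massive_fr = load_dataset("AmazonScience/massive", "fr-FR")

# Domain classification pairs each utterance with its scenario (domain) label.
scenario = massive_en["train"].features["scenario"]
train_pairs = [(ex["utt"], scenario.int2str(ex["scenario"]))
               for ex in massive_en["train"]]
print(len(train_pairs), train_pairs[0])
```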
We select $F_{1}$ micro as our evaluation metric, following Pontiki et al. (2016). For both datasets, our model is trained in English as the source language, and its performance is evaluated across 5 target languages: Dutch, Turkish, Russian, French, and Spanish for ACD, and 13 target languages for MASSIVE: French, Spanish, Turkish, Russian, Thai, Japanese, Indonesian, Icelandic, Amharic, Arabic, Azeri, Swahili, and Urdu.
To evaluate the performance of our proposed In-Context Cross-Lingual Transfer (IC-XLT) approach in a resource-constrained scenario with limited source-language data, we construct synthetically reduced datasets by sampling subsets of the training datasets following various $K$ -shot configurations, specifically $K_{src} \in \{8, 16, 32, 64\}$ . The objective of these evaluations is to assess IC-XLT's ability to leverage target-language demonstrations for enhancing performance in situations where the source-language task has limited resources.

Regarding the shot selection, our $K$-shot approach selects $K$ examples per class. Considering the class imbalance and the multi-label nature of the ACD dataset, the total number of examples will be in the range $[K, K \times N]$. For a detailed explanation, refer to Appendix A.1.

# 4.2 Experimental Setting

As our multilingual base model, we utilize mT5-large (Xue et al., 2020), an encoder-decoder model pre-trained on a diverse corpus encompassing over 100 languages. We employ LoRA (Hu et al., 2021) for fine-tuning the model on the source-language data with full training data and varying numbers of shots $K_{src}$ . During the inference stage, label predictions are generated through text generation, which facilitates multi-label inference. We adopt a greedy decoding strategy as implemented in Wolf et al. (2020).
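
A minimal sketch of this setup with the `transformers` and `peft` libraries, using the LoRA hyperparameters reported in Appendix A.2 ($r=16$, $\alpha=32$, $10\%$ dropout); the prompt string is again an assumed template:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("google/mt5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-large")

# LoRA adapters with the hyperparameters of Appendix A.2: r=16, alpha=32, 10% dropout.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.1, task_type="SEQ_2_SEQ_LM"))

# Labels are generated as text with greedy decoding, which also lets
# multi-label predictions come out as a single generated string.
inputs = tokenizer("text: the waiter was friendly label:", return_tensors="pt")
out = model.generate(**inputs, do_sample=False, max_new_tokens=20)  # greedy
print(tokenizer.decode(out[0], skip_special_tokens=True))
```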
In this work, we set $K_{tgt} = 1$, using only a One-Shot demonstration for the proposed IC-XLT approach<sup>3</sup>. We train the ICT models in the source language with different numbers of context examples, specifically $M = 10$ and $M = 20$. All models are trained on an NVIDIA Titan RTX GPU; the hyperparameter selection is discussed in Appendix A.2.1.
For the experiments with reduced source-language data, we conduct evaluations using two seeds for each of the following: the fine-tuning process, $K_{src}$ shot selection, and $K_{tgt}$ shot selection. Since Zero-Shot approaches do not require selecting target shots, we run a total of 4 and 8 runs for Zero-Shot and One-Shot respectively, using seeds within $\{1,2\}$ . For the models trained with full source-language data, we trained 5 models with seeds for the fine-tuning process within $\{1,\dots,5\}$ and selected the best 3 in the English validation set. At adaptation, we employ two seeds for $K_{tgt}$ shot selection in $\{1,2\}$ .

# 4.3 Baselines

We benchmark our proposed approach against the following baseline methods:
(1S) One-shot Prediction Leveraging mT5's pretraining objective, we task the model with predicting the missing span corresponding to the correct label given an input text prepended with a One-Shot demonstration. This is a traditional In-Context Learning approach where we expect the model to deduce label meanings from the examples without undergoing source-language fine-tuning, similar to the idea introduced in Winata et al. (2021), serving as the lower bound when $K_{src} = 0$ .
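
As an illustration, this baseline can be phrased as mT5 span infilling, reusing the tokenizer and model from the previous sketch; the sentinel-based template below is an assumption for illustration, not the exact prompt used in our experiments:

```python
# One-Shot prediction without fine-tuning: the demonstration conveys the label
# format, and mT5 fills in the sentinel span for the query.
demo_text, demo_label = "the staff was lovely", "service"   # hypothetical shot
query_text = "le serveur était très aimable"                # target-language input

prompt = (f"text: {demo_text} label: {demo_label} "
          f"text: {query_text} label: <extra_id_0>")
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, do_sample=False, max_new_tokens=10)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```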
(ZS-XLT) Zero Shot XLT The standard Zero-Shot $(K_{tgt} = 0)$ Cross-lingual Transfer approach, where the model is initially trained on a source language, and subsequent inference is conducted on the target language without any additional tuning. In this case, we train the mT5 model through prompt-based fine-tuning (PFT), with the input-output form:

$$
x_{i} \Rightarrow y_{i}
$$

Hence, training is performed on the source language and inference on the target languages.
(1S-XLT$^{\nabla}$ and 8S-XLT$^{\nabla}$) 1-Shot and 8-Shot XLT Using the same training scheme (PFT), we continue fine-tuning from the checkpoints of ZS-XLT, training with $K_{tgt} = 1$ and $K_{tgt} = 8$ shots per label in the target language. This approach is the standard gradient-based approach for adapting to a target language in Few-Shot Cross-Lingual Transfer (Lauscher et al., 2020). For this baseline, the target-language shots are not prepended as in IC-XLT and 1S, but used to further fine-tune the model following the same input-output format as ZS-XLT.
(1S-XLT$_{macro}^{\nabla}$) macro-averaging In this baseline, we build upon the methodology established in 1S-XLT$^{\nabla}$ with $K_{tgt} = 1$ by adopting a strategy that incorporates fine-tuning on both source and target-language data for adaptation to the target language. This method follows the approach described by Schmidt et al. (2022), where the loss for each batch is calculated using a weighted combination of source and target-language losses:

$$
L = \beta L_{src} + (1 - \beta) L_{tgt}
$$

Here, $L_{src}$ and $L_{tgt}$ represent the source and target-language losses, respectively. We select a value of $\beta = 0.5$ following the original implementation.
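
A sketch of one adaptation step under this weighted objective, assuming separate source- and target-language batches that already contain `input_ids`, `attention_mask`, and `labels`:

```python
def mixed_batch_loss(model, src_batch, tgt_batch, beta=0.5):
    """L = beta * L_src + (1 - beta) * L_tgt for one adaptation step."""
    loss_src = model(**src_batch).loss  # seq2seq cross-entropy on the source batch
    loss_tgt = model(**tgt_batch).loss  # seq2seq cross-entropy on the target batch
    return beta * loss_src + (1 - beta) * loss_tgt

# loss = mixed_batch_loss(model, src_batch, tgt_batch, beta=0.5)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```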
(IC-XLT$_{SRC}$) IC-XLT with source-language context We use the same models trained for IC-XLT; however, in this method the in-context examples are drawn from the training set of the source language (English). In essence, this can be considered a Zero-Shot baseline, since no target-language (or unseen source-language) data is involved in the adaptation. Through this baseline we aim to evaluate the relevance of the target-language One-Shot samples at the adaptation stage, assessing whether they are necessary for successful transfer to that target language.

# 5 Results and analysis

IC-XLT performance at Cross-lingual transfer For the first experiment, we compare our proposed approach, IC-XLT, to the baselines detailed in Section 4.3 using the full training set in the source language. For Aspect Category Detection, we observed a general trend where mT5 models trained with In-Context Tuning, which employs the input-output setting $\tilde{X}, x_i \Rightarrow y_i$, consistently outperformed models subjected to prompt-based fine-tuning with $x_i \Rightarrow y_i$ under the same training regimes, despite both models being trained for an equivalent number of steps and on the exact same data instances. We hypothesize that this superior performance may be attributed to the fact that the ICT-trained models see $M$ randomly ordered input-output examples at each instance, even though they are tasked with predicting the label only for $x_i$.
We present the $F_{1}$ micro scores across five different languages on ACD and 13 languages on MASSIVE in Tables 2 and 1, respectively. The numbers in these tables represent average metrics calculated across various seeds, as detailed in Section 4.2. The standard deviation for each method and language is provided in the complete tables located in Appendix A.5. We observe that our proposed approach, In-Context Cross-Lingual Transfer (IC-XLT), effectively outperforms the baselines by a substantial margin in the evaluated datasets, greatly improving mT5 cross-lingual transfer performance. A crucial observation is that for both of the evaluated datasets there is a noticeable increase in performance from IC-XLT$_{SRC}$ to IC-XLT. This means that the proposed approach adapts successfully, during inference time, to the target language by taking advantage of the One-Shot in-context demonstration.

On the other hand, the 1S-XLT$^{\nabla}$ approach did not improve over ZS-XLT by a considerable margin, which is consistent with previous results for high-level tasks (Lauscher et al., 2020; Zhao et al., 2021). Even when increasing target-language samples to $K_{tgt} = 8$ (baseline 8S-XLT$^{\nabla}$) on this gradient-based baseline, we observe a performance decline in the ACD dataset for most languages except Turkish. The MASSIVE dataset shows improvements across nearly all target languages with the increased number of target shots on 8S-XLT$^{\nabla}$; yet, these gains are modest compared to those achieved with IC-XLT using only $K_{tgt} = 1$, highlighting its superior efficiency in utilizing minimal target-language data for improved cross-lingual transfer.
The gradient-based adaptation that mixes target and source languages (1S-XLT$_{macro}^{\nabla}$) marginally outperforms 1S-XLT$^{\nabla}$ in the ACD dataset. However, it does not improve, and even drops, performance on the MASSIVE dataset. This approach also requires more computational resources, due to the larger data volume when including source-language examples.
The observed variability in the effectiveness of gradient-based methods across datasets and languages may be related to differences between the datasets; ACD represents a more complex, non-parallel task with classes that are easier to confuse, unlike the parallel and simpler nature of MASSIVE. Nonetheless, IC-XLT consistently outperforms the baselines across languages and datasets, demonstrating its robustness and cost-effectiveness in terms of computational resources. We find that $M = 10$ (the number of in-context demonstrations during ICT training) performs slightly better than $M = 20$ in the experiments that employ the full training set.
We conduct experiments to quantify the ability of IC-XLT to perform in scenarios with limited source-language resources. For this, we evaluate ZS-XLT, 1S-XLT$^{\nabla}$, IC-XLT$_{SRC}$, and IC-XLT models trained with $K_{src} \in \{8,16,32,64\}$. We noticed that models trained with the ICT framework generally perform better compared to PFT for low values of $K_{src}$. In Figures 1b and 1a, we illustrate the average performance on the target languages at different source-language resource availability regimes. In both figures, we can observe that IC-XLT makes better use of resources than prompt-based fine-tuning, especially at smaller values of $K_{src}$. Furthermore, the performance difference with the source language (English) is visibly smaller for IC-XLT; more discussion on this can be found below.
The $F_{1}$-micro averages for the target languages are shown in Tables 3 and 4 for ACD and MASSIVE, respectively. For all the language-specific performance metrics at different $K_{src}$ budgets, refer to Appendix A.5. The results in these tables show that IC-XLT trained on limited data ($K_{src} = 64$) achieves competitive or superior performance compared to baseline methods (ZS-XLT and 1S-XLT$^{\nabla}$) trained with full source datasets.
<table><tr><td>Method (Ktgt)</td><td>ENG(SRC)</td><td>Target avg</td><td>AMH</td><td>AZE</td><td>ISL</td><td>SWA</td><td>URD</td><td>ARA</td><td>IND</td><td>THA</td><td>TUR</td><td>FRA</td><td>JAP</td><td>RUS</td><td>SPA</td></tr><tr><td>1S (1)</td><td>38.54</td><td>30.48</td><td>24.26</td><td>30.38</td><td>31.19</td><td>32.78</td><td>29.48</td><td>31.15</td><td>32.26</td><td>30.83</td><td>33.24</td><td>33.34</td><td>29.69</td><td>25.80</td><td>31.90</td></tr><tr><td>ZS-XLT (0)</td><td>90.06</td><td>76.12</td><td>62.53</td><td>71.63</td><td>73.77</td><td>65.75</td><td>67.72</td><td>70.98</td><td>84.23</td><td>79.88</td><td>77.7</td><td>84.87</td><td>83.39</td><td>83.98</td><td>83.09</td></tr><tr><td>1S-XLT∇ (1)</td><td>90.08</td><td>76.21</td><td>62.78</td><td>71.63</td><td>73.9</td><td>65.85</td><td>68.34</td><td>71.11</td><td>84.17</td><td>79.95</td><td>77.68</td><td>84.86</td><td>83.45</td><td>83.89</td><td>83.15</td></tr><tr><td>8S-XLT∇ (8)</td><td>89.87</td><td>76.78</td><td>65.38</td><td>72.13</td><td>74.73</td><td>66.78</td><td>69.24</td><td>71.45</td><td>84.08</td><td>80.46</td><td>78.02</td><td>85.16</td><td>83.25</td><td>84.17</td><td>83.29</td></tr><tr><td>1S-XLT∇ macro (1)</td><td>89.93</td><td>75.8</td><td>62.53</td><td>71.05</td><td>73.48</td><td>65.42</td><td>67.9</td><td>70.49</td><td>83.8</td><td>79.33</td><td>77.17</td><td>84.71</td><td>82.87</td><td>83.78</td><td>82.86</td></tr><tr><td>IC-XLT M=10 (SRC)</td><td>89.41</td><td>74.39</td><td>63.39</td><td>70.82</td><td>69.22</td><td>56.06</td><td>67.47</td><td>71.62</td><td>81.04</td><td>79.02</td><td>76.65</td><td>82.81</td><td>82.99</td><td>84.41</td><td>81.53</td></tr><tr><td>IC-XLT M=20 (SRC)</td><td>89.46</td><td>74.02</td><td>61.27</td><td>71.49</td><td>68.83</td><td>53.76</td><td>68.26</td><td>70.77</td><td>81.6</td><td>78.91</td><td>76.36</td><td>83.13</td><td>82.83</td><td>83.76</td><td>81.23</td></tr><tr><td>IC-XLT M=10 (1)</td><td>89.41</td><td>81.24</td><td>70.68</td><td>81.73</td><td>81.07</td><td>78.23</td><td>76.34</td><td>77.28</td><td>85.6</td><td>80.98</td><td>83.63</td><td>85.53</td><td>84.68</td><td>86.18</td><td>84.18</td></tr><tr><td>IC-XLT M=20 (1)</td><td>89.46</td><td>80.26</td><td>68.09</td><td>81.37</td><td>80.32</td><td>75.54</td><td>76.12</td><td>76.43</td><td>85.46</td><td>81.02</td><td>82.25</td><td>84.92</td><td>83.77</td><td>84.39</td><td>83.65</td></tr></table>
Table 1: Average $F_{1}$ micro in the MASSIVE Domain Detection task, trained with full data in English. The number in parentheses is the number of samples per label in the target language used for the adaptation process ($K_{tgt}$). Standard deviations over different seeds per each language are shown in Appendix A.5.
<table><tr><td>Method (Ktgt)</td><td>ENG(SRC)</td><td>Target avg</td><td>FRA</td><td>NLD</td><td>RUS</td><td>SPA</td><td>TUR</td></tr><tr><td>1S (1)</td><td>37.09</td><td>28.42</td><td>31.78</td><td>20.33</td><td>34.38</td><td>34.86</td><td>20.76</td></tr><tr><td>ZS-XLT (0)</td><td>79.14</td><td>70.15</td><td>67.96</td><td>69.24</td><td>73.6</td><td>70.01</td><td>69.95</td></tr><tr><td>1S-XLT∇ (1)</td><td>79.16</td><td>70.39</td><td>67.98</td><td>69.56</td><td>73.44</td><td>70.23</td><td>70.72</td></tr><tr><td>8S-XLT∇ (8)</td><td>78.78</td><td>70.26</td><td>67.87</td><td>68.81</td><td>72.32</td><td>69.84</td><td>72.47</td></tr><tr><td>1S-XLT∇ macro (1)</td><td>79.16</td><td>70.41</td><td>68.03</td><td>69.59</td><td>73.3</td><td>70.25</td><td>70.88</td></tr><tr><td>IC-XLT M=10 (SRC)</td><td>81.48</td><td>73.83</td><td>74.06</td><td>72.54</td><td>77.59</td><td>73.7</td><td>71.27</td></tr><tr><td>IC-XLT M=20 (SRC)</td><td>81.76</td><td>73.11</td><td>73.49</td><td>71.89</td><td>77.52</td><td>73.14</td><td>69.51</td></tr><tr><td>IC-XLT M=10 (1)</td><td>81.48</td><td>75.25</td><td>74.07</td><td>73.34</td><td>78.07</td><td>75.20</td><td>75.59</td></tr><tr><td>IC-XLT M=20 (1)</td><td>81.76</td><td>75.05</td><td>73.5</td><td>73.00</td><td>78.01</td><td>74.46</td><td>76.26</td></tr></table>

Table 2: Average $F_{1}$ micro in the Aspect Category Detection dataset, trained with full data in English, the source language. Standard deviations over different seeds per each language are shown in Appendix A.5.
Given that the target-language adaptation occurs at inference time, the improvement over the Zero-Shot approach comes at no extra computational cost and at a minimal data cost. This makes it possible to achieve good performance with limited computational and data resources.
For the experiments with limited source-language data, $M = 20$ achieves better performance in MASSIVE. We believe that, since ACD contains only 12 labels, in this scenario a context length of 20 will inevitably prepend more repeated context examples than in the MASSIVE dataset when training with limited data. This reduced variability may hurt the model's performance compared to $M = 10$.
<table><tr><td>Ksrc</td><td colspan="4">1S</td></tr><tr><td>0</td><td colspan="4">28.42</td></tr><tr><td></td><td>ZS-XLT</td><td>1S-XLT∇</td><td>IC-XLT M=10</td><td>IC-XLT M=20</td></tr><tr><td>8</td><td>25.40</td><td>27.12</td><td>33.34</td><td>16.64</td></tr><tr><td>16</td><td>30.84</td><td>32.63</td><td>48.66</td><td>47.04</td></tr><tr><td>32</td><td>43.56</td><td>43.36</td><td>58.91</td><td>61.00</td></tr><tr><td>64</td><td>55.15</td><td>53.85</td><td>65.64</td><td>65.28</td></tr><tr><td>Full</td><td>70.15</td><td>70.39</td><td>75.25</td><td>75.05</td></tr></table>

Table 3: Average $F_{1}$ micro across 5 target languages for Aspect Category Detection.

<table><tr><td>Ksrc</td><td colspan="4">1S</td></tr><tr><td>0</td><td colspan="4">30.48</td></tr><tr><td></td><td>ZS-XLT</td><td>1S-XLT∇</td><td>IC-XLT M=10</td><td>IC-XLT M=20</td></tr><tr><td>8</td><td>44.05</td><td>45.86</td><td>62.46</td><td>66.29</td></tr><tr><td>16</td><td>51.13</td><td>51.97</td><td>70.30</td><td>73.14</td></tr><tr><td>32</td><td>64.74</td><td>65.28</td><td>73.27</td><td>76.72</td></tr><tr><td>64</td><td>63.45</td><td>63.68</td><td>77.55</td><td>79.09</td></tr><tr><td>Full</td><td>76.12</td><td>76.21</td><td>81.24</td><td>80.26</td></tr></table>

Table 4: Average $F_{1}$ micro across 13 target languages for MASSIVE (Domain Classification).

Measuring the transfer gap with the source language. By measuring the performance gap between the source language and the target language, we aim to quantify the contribution of the ICT training framework and target-language demonstrations for mitigating this gap. As we provide the model with target-language examples, we anticipate a smaller decrease in performance from the source language when evaluating on a new language, compared to Zero-Shot approaches. We can measure this by computing the average transfer gap $\bar{\Delta}\%$, which is the average percentage decrease in performance relative to the evaluations on the test set in the source language (English). This is defined as:

$$
\bar{\Delta}\% = 100 \times \mathbf{E}\left[ P_{TL} / P_{SL} - 1 \right]
$$

Where $P_{TL}$ and $P_{SL}$ represent the evaluation performance of the exact same method on the target and source-language test sets, respectively. The performance gap values are shown in Figure 2. We can observe that for most source-language data budgets, we obtain a reduced average transfer gap $\bar{\Delta}\%$ through IC-XLT compared to ZS-XLT, 1S-XLT$^{\nabla}$ and IC-XLT$_{SRC}$.
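
As a worked example, the transfer gap can be computed directly from per-language scores; the snippet below applies the definition to the full-data IC-XLT ($M=10$, $K_{tgt}=1$) row of Table 2:

```python
def avg_transfer_gap(target_scores, source_score):
    """Average transfer gap (%): 100 * E[P_TL / P_SL - 1].

    target_scores: mapping language -> F1 on that target language.
    source_score: F1 on the source-language (English) test set.
    """
    ratios = [p / source_score - 1 for p in target_scores.values()]
    return 100 * sum(ratios) / len(ratios)

# Full-data IC-XLT (M=10, K_tgt=1) on ACD, from Table 2:
scores = {"FRA": 74.07, "NLD": 73.34, "RUS": 78.07, "SPA": 75.20, "TUR": 75.59}
print(round(avg_transfer_gap(scores, 81.48), 2))  # ≈ -7.64, i.e. a 7.64% drop
```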
Improvement due to target-language demonstrations. Aiming to assess the proposed method's capacity to utilize demonstrations in the target language to enhance performance in that language, we compute the average percentage improvement in performance $\delta\%$ after introducing target-language shots. The formula for computing this value is described in Appendix A.4.

(a) MASSIVE performance with different source data availability. IC-XLT trained with $M = 10$.

(b) ACD performance with different source data availability. IC-XLT trained with $M = 10$.

Figure 1: Comparison of IC-XLT and 1S-XLT$^{\nabla}$ performance at different source-language data budgets. Pink lines employ target-language examples for adaptation while blue lines do not. We can observe that, in general, the IC-XLT models yield better performance compared to ZS-XLT and 1S-XLT$^{\nabla}$. This is especially notable in lower-resource scenarios.

<table><tr><td colspan="3"></td><td colspan="5"><5 B</td><td colspan="4">5-100 B</td><td colspan="4">>100B</td></tr><tr><td></td><td>Method (Ktgt)</td><td>Target avg</td><td>AMH</td><td>AZE</td><td>ISL</td><td>SWA</td><td>URD</td><td>ARA</td><td>IND</td><td>TUR</td><td>THA</td><td>FRA</td><td>JAP</td><td>RUS</td><td>SPA</td></tr><tr><td rowspan="2">δ%</td><td>8S-XLT∇ (8)</td><td>0.87</td><td>4.56</td><td>0.7</td><td>1.3</td><td>1.57</td><td>2.24</td><td>0.66</td><td>-0.18</td><td>0.41</td><td>0.73</td><td>0.34</td><td>-0.17</td><td>0.23</td><td>0.24</td></tr><tr><td>IC-XLT (1)</td><td>9.81</td><td>11.5</td><td>15.41</td><td>17.12</td><td>39.55</td><td>13.15</td><td>7.9</td><td>5.63</td><td>9.11</td><td>2.48</td><td>3.28</td><td>2.04</td><td>2.1</td><td>3.25</td></tr></table>

Table 5: $\delta\%$ in MASSIVE. The first row refers to the number of tokens of the target languages in mT5 pretraining corpora. We use IC-XLT with $M = 10$.

<table><tr><td></td><td>Method (Ktgt)</td><td>Target avg</td><td>FRA</td><td>NLD</td><td>RUS</td><td>SPA</td><td>TUR</td></tr><tr><td rowspan="2">δ%</td><td>1S-XLT∇ (1)</td><td>0.34</td><td>0.03</td><td>0.46</td><td>-0.22</td><td>0.31</td><td>1.10</td></tr><tr><td>IC-XLT (1)</td><td>1.92</td><td>0.01</td><td>1.1</td><td>0.62</td><td>2.04</td><td>6.06</td></tr></table>

Table 6: $\delta\%$ in the Aspect Category Detection dataset. Here we use IC-XLT with $M = 10$.
For the MASSIVE dataset, since 1S-XLT$^{\nabla}$ offers minimal improvements with $K_{tgt} = 1$, we conduct a comparison between IC-XLT with $K_{tgt} = 1$ and 8S-XLT$^{\nabla}$ with $K_{tgt} = 8$.
Values for $\delta\%$ are detailed in Tables 6 and 5 for the Aspect Category Detection (ACD) and MASSIVE datasets, respectively. We observe that IC-XLT consistently achieves the highest $\delta\%$ compared to the fine-tuned approaches. This result underscores IC-XLT's effectiveness for Few-Shot Cross-lingual text classification, a high-level task where methods that involve fine-tuning for language adaptation underperform when the amount of target-language data is limited.
<table><tr><td></td><td>mT5 pretraining</td><td>Ling. Dist. vs ENG</td></tr><tr><td>IC-XLT$_{SRC}$</td><td>-0.786**</td><td>0.232</td></tr><tr><td>8S-XLT∇</td><td>-0.874**</td><td>0.331</td></tr></table>
Table 7: Correlation of the per-language improvement ($\delta\%$) with linguistic distance and language representation in mT5 pretraining corpora (MASSIVE dataset).
Correlation of methods, pretraining data and linguistic distance. By analyzing results in the MASSIVE dataset across the 13 target languages, we observe that by introducing target-language demonstrations some languages, such as French, Japanese, or Russian, show modest enhancements of around $2\%$, while others, like Azeri, Icelandic, and Swahili, benefit from increases exceeding $15\%$. To understand the underlying factors contributing to these differences, we explore the relationship between the improvement observed ($\delta\%$) and two variables: (1) the number of tokens representing each language in mT5 pretraining corpora (Xue et al., 2020), and (2) the linguistic distance between each target language and English, the source language, according to the URIEL database (Malaviya et al., 2017). The analysis is limited to the MASSIVE dataset because the ACD dataset includes only 5 target languages, mostly European and all well-represented in mT5 pretraining data, with no languages having low representation. This limits the scope for obtaining reliable correlation measures.

Figure 2: The average transfer gap $\bar{\Delta}\%$ of IC-XLT, IC-XLT$_{SRC}$, 1S-XLT$^{\nabla}$ and ZS-XLT at different source-language data budgets (IC-XLT with $M = 10$). We can observe that, for most cases, IC-XLT yields the smallest drop in performance after transferring to a target language compared to the baselines.
To quantitatively assess these relationships, we measure the Spearman Correlation to identify how the token counts of pretraining data and the linguistic proximity to English correlate with the effectiveness of target-language demonstrations in improving cross-lingual transfer performance. From the correlations presented in Table 7 we can observe that both IC-XLT and $8\mathrm{S - XLT}^{\nabla}$ exhibit a statistically significant negative Spearman correlation between the improvement $(\delta \%)$ and the representation of target languages in the mT5 pretraining corpora. This pattern indicates that languages with less representation in the pretraining data will experience more substantial improvements through target-language adaptation. Correlation with linguistic distance, on the other hand, is weaker and not statistically significant.
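
A sketch of this analysis with SciPy; the $\delta\%$ values are taken from Table 5, while the token counts below are illustrative placeholders only, not the actual statistics of Xue et al. (2020):

```python
from scipy.stats import spearmanr

# Per-language improvement delta% for IC-XLT (row "IC-XLT (1)" of Table 5),
# ordered as AMH, AZE, ISL, SWA, URD, ARA, IND, TUR, THA, FRA, JAP, RUS, SPA.
delta = [11.5, 15.41, 17.12, 39.55, 13.15, 7.9, 5.63,
         9.11, 2.48, 3.28, 2.04, 2.1, 3.25]

# Pretraining token counts in billions for the same languages. Placeholder
# values for illustration; the real counts come from Xue et al. (2020).
tokens_b = [0.2, 0.8, 1.0, 1.6, 3.0, 57.0, 47.0,
            71.0, 47.0, 320.0, 160.0, 710.0, 430.0]

rho, p = spearmanr(delta, tokens_b)
print(f"Spearman rho = {rho:.3f} (p = {p:.4f})")  # a negative rho is expected
```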

# 6 Conclusion

In this paper, we investigated the application of In-Context Tuning for One-Shot Cross-lingual transfer, introducing In-Context Cross-lingual Transfer (IC-XLT). Our evaluations conducted on a multilingual encoder-decoder model (mT5) demonstrate the efficacy of the proposed method in effectively adapting at inference time to target languages using only a One-Shot demonstration in-context, all without incurring additional computational expenses (gradient-free). Furthermore, in comparison to ZS-XLT and 1S-XLT$^{\nabla}$, IC-XLT demonstrated superior performance and a smaller transfer gap for the task of text classification, a high-level task where FS-XLT tends to underperform.
In scenarios with limited source-language training data, we provide empirical evidence that IC-XLT learns the source language better at the meta-training stage and demonstrates a smaller transfer gap at the adaptation stage with the One-Shot demonstration, compared to ZS-XLT and 1S-XLT$^{\nabla}$. This makes IC-XLT a valuable tool for cross-lingual transfer in resource-limited scenarios. Our findings also show a significant correlation between the performance improvements in target languages and their token count in the mT5 pretraining corpus, indicating that languages with lesser representation tend to benefit more from target-language adaptation through IC-XLT.
To our knowledge, this study represents the first exploration of In-Context Tuning for Cross-Lingual Transfer. For future work, we aim to explore the potential and limitations of this approach by evaluating its applicability to other architectures, such as decoder-only or encoder-only models, and examining the impact of training with a greater number of examples in-context.

# 7 Limitations

In this study, we implement our approach using an mT5-large encoder-decoder model. However, an evaluation of its applicability to encoder-only or decoder-only models remains unexplored and is left for future work. Furthermore, due to storage and compute constraints and the need to conduct experiments across diverse seeds and training data budgets, we opted to fine-tune the models using LoRA (Hu et al., 2021). While some variability compared to the fully trained model is expected with this architectural choice, empirical evidence from Hu et al. (2021) suggests that its impact is minimal. In this work, we do not compare with methods that translate the source-language training set into target languages. Such approaches require a separate machine translation system and thus are more expensive, falling beyond the scope of our research. Our focus remains on utilizing a single model in an end-to-end manner. Finally, it is important to note that due to the maximum input length of mT5 (1024 tokens), scaling IC-XLT to a larger number of target-language shots (e.g., $K_{tgt} \in \{4,8,16\}$) may prove difficult using the current approach. This challenge is particularly pronounced in scenarios with a substantial number of labels, where input text may need to be truncated. Consequently, there is a need to devise a strategy to either reduce input length or integrate information from different example batches in order to address this limitation.

# Acknowledgements

We thank the anonymous reviewers and the meta reviewer at ARR for their valuable feedback. Also, we thank CONAHCYT for the computer resources provided through the INAOE Supercomputing Laboratory's Deep Learning Platform for Language Technologies and CIMAT Bajio Supercomputing Laboratory (300832). Sanchez-Vega acknowledges CONAHCYT for its support through the program "Investigadoras e Investigadores por México" (Project ID.11989, No.1311). Villa-Cueva (CVU 1019520) thanks CONAHCYT for the support through the master's degree scholarship at CIMAT.

# References

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. CoRR, abs/2005.14165.
Guanhua Chen, Shuming Ma, Yun Chen, Li Dong, Dongdong Zhang, Jia Pan, Wenping Wang, and Furu Wei. 2021. Zero-shot cross-lingual transfer of neural machine translation with multilingual pretrained encoders. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 15–26, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, and He He. 2022. Meta-learning via language model in-context tuning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 719-730, Dublin, Ireland. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. CoRR, abs/1911.02116.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gokhan Tur, and Prem Natarajan. 2022. Massive: A 1m-example multilingual natural language understanding dataset with 51 typologically-diverse languages.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. CoRR, abs/2106.09685.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.
Phillip Keung, Yichao Lu, Julian Salazar, and Vikas Bhardwaj. 2020. Don't use English dev: On the zero-shot cross-lingual evaluation of contextual embeddings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 549-554, Online. Association for Computational Linguistics.
Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4483-4499, Online. Association for Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. CoRR, abs/1711.05101.
Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning language representations for typology prediction. In *Conference on Empirical Methods in Natural Language Processing* (EMNLP), Copenhagen, Denmark.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, and Gülşen Eryigit. 2016. SemEval-2016 task 5: Aspect based sentiment analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 19–30, San Diego, California. Association for Computational Linguistics.
Fabian David Schmidt, Ivan Vulic, and Goran Glavaš. 2022. Don't stop fine-tuning: On training regimes for few-shot cross-lingual transfer with multilingual language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10725–10742, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Rosanne Liu, Jason Yosinski, and Pascale Fung. 2021. Language models are few-shot multilingual learners. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 1-15, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Huggingface's transformers: State-of-the-art natural language processing.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A massively multilingual pre-trained text-to-text transformer. CoRR, abs/2010.11934.
Steve Yadlowsky, Lyric Doshi, and Nilesh Tripuraneni. 2023. Pretraining data mixtures enable narrow model selection capabilities in transformer models.
Mengjie Zhao, Yi Zhu, Ehsan Shareghi, Ivan Vulic, Roi Reichart, Anna Korhonen, and Hinrich Schütze. 2021. A closer look at few-shot crosslingual transfer: The choice of shots matters. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5751-5767, Online. Association for Computational Linguistics.

# A Appendix

# A.1 Shot selection

Similar to Zhao et al. (2021), with "K-shot" we refer to selecting $K$ examples for each of the $N$ classes. The examples are randomly sampled from the training splits of the datasets. Note that the number of shots per label may not precisely be $K$ due to underrepresented classes in the training set. This holds true for the ACD dataset, where certain classes may have insufficient samples to meet the per-class $K$ value. In such cases, the total number of shots per i-th class is determined as $\min(K, N_i)$ , where $N_i$ represents the total number of samples for the i-th class in the dataset.
Furthermore, since the ACD task involves a multi-label dataset, multi-label examples may count toward more than one of the $N$ buckets simultaneously. Hence, the total number of examples in a $K$-shot dataset is in the range $[K, K \times N]$.
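
A sketch of this selection procedure, assuming each example carries a list of class labels (the single-label case simply has one-element lists):

```python
import random
from collections import defaultdict

def select_k_shots(dataset, K, rng=random):
    """Select up to K examples per class. In the multi-label case an example
    counts toward every class it carries, so the K-shot set holds between
    K and K * N examples overall."""
    buckets = defaultdict(list)
    for idx, (text, labels) in enumerate(dataset):  # labels: list of class names
        for c in labels:
            buckets[c].append(idx)

    chosen = set()
    for c, pool in buckets.items():
        n = min(K, len(pool))  # min(K, N_i) caps underrepresented classes
        chosen.update(rng.sample(pool, n))
    return [dataset[i] for i in sorted(chosen)]
```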

# A.2 Hyperparameter selection

Here, we outline the hyperparameters utilized for fine-tuning models across the two stages of our pipeline. Initially, we detail the hyperparameters specific to fine-tuning models in the source language for both PFT and ICT methods. Subsequently, we address those employed for the gradient-based XLT methods. The LoRA (Hu et al., 2021) parameters are $r = 16$ , $\alpha = 32$ , with dropout of $10\%$ . For all cases, we employ an AdamW optimizer (Loshchilov and Hutter, 2017) with a linear scheduler and a batch size of 8.

# A.2.1 Fine-tuning on source data

For the fine-tuning process on both datasets, we explored learning rates within the range of $lr \in \{3,4,5,6,7,8,10\} \times 10^{-4}$, selecting $4\times 10^{-4}$ as it performed adequately on both datasets in the source language.
Regarding the number of epochs for training on the full datasets: for the MASSIVE dataset, we fine-tuned models for 10 epochs under both prompt-based Fine-Tuning (PFT) and In-Context Tuning (ICT) training schemes. For the Aspect Category Detection (ACD) dataset, which is considerably smaller, we extended the training duration to 15 epochs for ICT and 25 epochs for PFT. The decision to train the PFT models for more epochs in the ACD dataset was taken because it underperformed when only 15 epochs were used.
For models trained with a reduced quantity of source-language data, we standardized the training process by setting the learning rate to $5 \times 10^{-4}$ for all models across both datasets. Given the similar volume of data in these constrained scenarios, we extended the training period to 35 epochs for both datasets to expose the models to sufficient training data.

# A.2.2 Fine-tuned XLT baselines hyperparameter selection.

For the fine-tuned XLT baselines, we use the model checkpoint fine-tuned in the source language that was used for ZS-XLT. Training focuses only on the target language, incorporating only the specified number of target-language shots $K_{tgt}$; for 1S-XLT$_{macro}^{\nabla}$, an equal amount of source-language shots is added.
We evaluated the following learning rates $lr \in \{0.5, 1, 5\} \times 10^{-5}$ . We observed that the limited target-language examples often led to overfitting and reduced performance (notably when $K_{tgt} = 1$ ) with a large number of epochs, especially in some languages such as Russian. For the reported results, the selected learning rates and training durations are as follows:
- For both 1S-XLT$^{\nabla}$ and 1S-XLT$_{macro}^{\nabla}$, a learning rate of $5 \times 10^{-5}$ is used, training for 1 epoch for models trained with the full source-language dataset and 5 epochs for those trained with limited source-language data.
- For the 8S-XLT$^{\nabla}$ baseline, given the increased number of examples, we opt for a learning rate of $1 \times 10^{-5}$ across 10 epochs.
All adaptations used a batch size of 8 and a constant scheduler.
<table><tr><td></td><td>Train</td><td>Test</td></tr><tr><td>English</td><td>2000</td><td>676</td></tr><tr><td>Spanish</td><td>2070</td><td>881</td></tr><tr><td>French</td><td>1664</td><td>668</td></tr><tr><td>Turkish</td><td>1232</td><td>144</td></tr><tr><td>Russian</td><td>3655</td><td>1209</td></tr><tr><td>Dutch</td><td>1722</td><td>575</td></tr></table>
Table 8: Length of the training and test partitions in the Aspect Category Detection Dataset.

# A.3 Dataset description

In this section we provide descriptive information on the employed datasets. MASSIVE features parallel language splits, each comprising 11.5k samples in the training partition and 2.97k in the test partition.
However, for the Aspect Category Detection dataset, which is non-parallel, the sample counts vary across languages. Detailed information on these counts is presented in Table 8.

# A.4 Computing the improvement due to target-language demonstrations

By computing $\delta\%$ we aim to measure the average improvement in performance of the evaluated methods after introducing target-language demonstrations. For this we compute the ratio between the model with target-language demonstrations and the Zero-Shot approach. This value is computed using the following formula:

$$
\delta\% = 100 \times \left[ P_{TL}^{\text{few-shot}} / P_{TL}^{\text{zero-shot}} - 1 \right]
$$

Where $P_{TL}^{\text{few-shot}}$ and $P_{TL}^{\text{zero-shot}}$ represent the average evaluation performance of the model under the same training scheme (ICT or PFT), with and without the target-language ($TL$) examples, respectively. Hence, for prompt-based fine-tuning $P_{TL}^{\text{few-shot}}$ is 1S/8S-XLT$^{\nabla}$ and $P_{TL}^{\text{zero-shot}}$ is ZS-XLT, while for In-Context Tuning $P_{TL}^{\text{few-shot}}$ is IC-XLT and $P_{TL}^{\text{zero-shot}}$ is IC-XLT$_{SRC}$.
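
As a quick worked example, applying this formula to the full-data ACD results of Table 2 recovers the Turkish entry of Table 6:

```python
def delta_percent(p_few_shot, p_zero_shot):
    """delta% = 100 * (P_TL^few-shot / P_TL^zero-shot - 1)."""
    return 100 * (p_few_shot / p_zero_shot - 1)

# Turkish on ACD: IC-XLT (M=10, K_tgt=1) vs. IC-XLT_SRC (M=10), from Table 2.
print(round(delta_percent(75.59, 71.27), 2))  # 6.06, matching Table 6
```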

# A.5 Performance metrics per language of the evaluated methods on different source-language budgets.

This section presents the comprehensive results of our evaluations across different target languages and different source data availability settings of ZS-XLT, 1S-XLT$^{\nabla}$, and IC-XLT, with English serving as the source language. Detailed performance metrics for cross-lingual transfer on the Aspect Category Detection (ACD) dataset are depicted in Table 9, and results for the MASSIVE dataset are provided in Tables 10 and 11.
Additionally, we illustrate the language-wise performance at different $K_{src}$ values in Figures 3 and 4. Furthermore, for the MASSIVE dataset we illustrate the different behavior of groups of languages with different representation in the mT5 pretraining corpus. Figures 5a and 5b illustrate the performance metrics for languages with high representation (over 100 billion tokens) and low representation (under 100 billion tokens) in the mT5 pretraining data, respectively. We also include the Average Transfer Gaps $\bar{\Delta}\%$ per language at the different source-language budgets in Figures 6 and 7.

# A.6 Ethics Statement

The proposed method helps to improve downstream cross-lingual performance on languages underrepresented in the multilingual model pretraining. However, we believe that data collection for low-resource languages should continue to be a priority for the NLP research community. It remains very important to better integrate linguistic diversity in multilingual models to avoid technological biases and avoid hindering technological development in societies whose language has a limited number of speakers, or limited funding for developing linguistic resources.
On the other hand, we acknowledge the challenge posed by language variants, including regional dialects, which often suffer from underrepresentation. This may result in multilingual models biased towards variants with a larger digital footprint, excluding linguistic features of communities less present in digital spaces. We wish to acknowledge this aspect and warn that it could increase the exclusion of these communities from integration into the digital content ecosystem and information technology tools.

# A.7 Licences of systems and datasets

In this work, the tools utilized include an mT5 model and the transformers library (Wolf et al., 2020), both of which use the Apache 2.0 license. The MASSIVE dataset, on the other hand, operates under a CC BY 4.0 license. As for the Aspect Category Detection dataset, it employs a MS-NC-No ReD license, which limits its usage strictly to an academic scope. Since the aim of this work is to evaluate the performance of a proposed cross-lingual system, we adhere to all the licenses of the utilized material.
The research presented in this paper is intended for academic purposes, and therefore, we adhere to the licenses governing all utilized materials.
Figure 3: $\mathrm{F}_1$-micro for each of the 5 evaluated languages in the Aspect Category Detection dataset at different source-language data budgets.
Figure 4: $\mathrm{F}_1$-micro for each of the 13 evaluated languages in the MASSIVE dataset at different source-language data budgets.
Figure 5: Comparison of IC-XLT and 1S-XLT$^{\nabla}$ performance on the MASSIVE dataset at different source-language data budgets for languages with high representation (left) and low representation (right) in the base model pretraining.
(a) Mean $\mathrm{F_1}$-micro in the MASSIVE dataset across the target languages with large representation in mT5 pretraining corpora ($>100B$). In this scenario, the gap between One-Shot (pink) and Zero-Shot (blue) lines is similar to the one observed for ACD in Figure 1b.
(b) Mean $\mathrm{F_1}$-micro in the MASSIVE dataset across the target languages with smaller representation in mT5 pretraining corpora ($< 100B$). We can observe a larger improvement when introducing target-language demonstrations than the one observed for languages well-represented in pretraining (Figure 5a).
Figure 6: Average Transfer Gap $\bar{\Delta}\%$ per language on the Aspect Category Detection dataset.
Figure 7: Average Transfer Gap $\bar{\Delta}\%$ per language on the MASSIVE dataset.
<table><tr><td>Ksrc</td><td>ENG</td><td>FRA</td><td>NLD</td><td>RUS</td><td>SPA</td><td>TUR</td></tr><tr><td colspan="7">ZS-XLT (Ktgt = 0)</td></tr><tr><td>8-shot</td><td>29.49±2.06</td><td>26.1±1.46</td><td>21.19±3.74</td><td>29.32±2.84</td><td>31.99±2.46</td><td>18.4±4.34</td></tr><tr><td>16-shot</td><td>33.73±4.12</td><td>32.24±3.63</td><td>28.73±6.76</td><td>33.02±3.22</td><td>34.96±4.48</td><td>25.27±7.68</td></tr><tr><td>32-shot</td><td>49.05±5.33</td><td>45.41±4.68</td><td>40.44±4.68</td><td>45.31±5.9</td><td>43.37±4.73</td><td>43.28±3.93</td></tr><tr><td>64-shot</td><td>60.45±3.3</td><td>55.07±2.46</td><td>53.49±1.88</td><td>56.35±1.23</td><td>53.9±3.4</td><td>56.96±3.02</td></tr><tr><td>Full</td><td>79.14±0.91</td><td>67.96±0.77</td><td>69.24±1.24</td><td>73.6±1.61</td><td>70.01±0.44</td><td>69.95±1.13</td></tr><tr><td colspan="7">1S-XLT^V (Ktgt = 1)</td></tr><tr><td>8-shot</td><td>28.97±1.66</td><td>26.65±1.09</td><td>25.87±1.56</td><td>29.21±0.56</td><td>33.23±1.33</td><td>20.66±1.42</td></tr><tr><td>16-shot</td><td>37.62±4.98</td><td>33.37±1.98</td><td>32.44±4.32</td><td>33.86±2.48</td><td>33.22±4.23</td><td>30.28±2.66</td></tr><tr><td>32-shot</td><td>51.94±4.06</td><td>45.07±4.69</td><td>41.88±2.26</td><td>45.18±5.88</td><td>37.69±4.04</td><td>46.97±3.1</td></tr><tr><td>64-shot</td><td>62.32±2.53</td><td>54.5±3.37</td><td>53.62±2.33</td><td>55.34±1.48</td><td>48.09±3.17</td><td>57.68±2.65</td></tr><tr><td>Full</td><td>79.16±0.78</td><td>67.98±0.74</td><td>69.56±0.63</td><td>73.44±1.55</td><td>70.23±0.47</td><td>70.72±1.43</td></tr><tr><td colspan="7">IC-XLT^M=20 (Ktgt = 0)</td></tr><tr><td>8-shot</td><td>23.66±6.16</td><td>17.3±1.76</td><td>19.9±2.78</td><td>19.05±4.89</td><td>22.01±4.31</td><td>16.09±5.24</td></tr><tr><td>16-shot</td><td>41.19±9.22</td><td>44.39±5.39</td><td>38.9±7.11</td><td>43.25±6.66</td><td>40.82±8.55</td><td>36.22±13.0</td></tr><tr><td>32-shot</td><td>63.25±2.36</td><td>60.92±1.71</td><td>55.11±2.28</td><td>59.7±2.02</td><td>59.69±1.32</td><td>56.42±2.91</td></tr><tr><td>64-shot</td><td>70.01±1.77</td><td>65.67±1.3</td><td>60.08±2.27</td><td>64.09±2.19</td><td>63.55±0.34</td><td>62.23±4.69</td></tr><tr><td>Full</td><td>81.76±0.81</td><td>73.49±0.37</td><td>71.89±0.38</td><td>77.52±0.38</td><td>73.14±0.82</td><td>69.51±1.04</td></tr><tr><td colspan="7">IC-XLT^M=20 (Ktgt = 1)</td></tr><tr><td>8-shot</td><td>23.66±6.16</td><td>17.12±13.05</td><td>15.17±6.96</td><td>16.58±11.33</td><td>13.49±8.87</td><td>20.83±13.2</td></tr><tr><td>16-shot</td><td>41.19±9.22</td><td>48.24±4.75</td><td>44.89±6.13</td><td>49.64±4.95</td><td>47.26±8.59</td><td>45.17±8.6</td></tr><tr><td>32-shot</td><td>63.25±2.36</td><td>61.37±2.49</td><td>58.01±1.44</td><td>62.05±1.53</td><td>61.74±1.3</td><td>61.85±5.97</td></tr><tr><td>64-shot</td><td>70.01±1.77</td><td>65.86±1.28</td><td>63.1±0.79</td><td>65.12±0.96</td><td>65.34±1.18</td><td>66.98±2.66</td></tr><tr><td>Full</td><td>81.76±0.81</td><td>73.5±0.61</td><td>73.0±0.59</td><td>78.01±0.41</td><td>74.46±0.48</td><td>76.26±1.02</td></tr><tr><td colspan="7">IC-XLT^M=10 (Ktgt = 
0)</td></tr><tr><td>8-shot</td><td>34.71±7.33</td><td>34.05±9.33</td><td>32.13±6.46</td><td>38.42±5.92</td><td>37.47±7.7</td><td>25.75±6.15</td></tr><tr><td>16-shot</td><td>52.08±11.85</td><td>48.49±10.46</td><td>45.12±11.08</td><td>50.81±8.61</td><td>46.89±11.15</td><td>44.34±16.01</td></tr><tr><td>32-shot</td><td>60.45±8.84</td><td>57.6±5.95</td><td>53.42±6.53</td><td>58.15±5.02</td><td>57.32±5.48</td><td>57.22±10.22</td></tr><tr><td>64-shot</td><td>69.84±1.32</td><td>66.96±1.43</td><td>60.41±0.79</td><td>65.14±1.08</td><td>64.17±1.5</td><td>62.31±2.33</td></tr><tr><td>Full</td><td>81.48±0.37</td><td>74.06±1.03</td><td>72.54±0.82</td><td>77.59±0.85</td><td>73.7±0.72</td><td>71.27±1.17</td></tr><tr><td colspan="7">IC-XLT^M=10 (Ktgt = 1)</td></tr><tr><td>8-shot</td><td>34.71±7.33</td><td>32.24±4.16</td><td>30.62±6.04</td><td>40.4±7.85</td><td>31.04±12.53</td><td>32.41±8.23</td></tr><tr><td>16-shot</td><td>52.08±11.85</td><td>49.71±11.04</td><td>45.86±9.55</td><td>54.83±5.18</td><td>48.69±9.92</td><td>44.2±9.68</td></tr><tr><td>32-shot</td><td>60.45±8.84</td><td>59.38±5.55</td><td>55.23±5.6</td><td>60.21±2.98</td><td>59.06±5.06</td><td>60.66±9.33</td></tr><tr><td>64-shot</td><td>69.84±1.32</td><td>67.2±1.49</td><td>62.32±1.1</td><td>65.32±0.62</td><td>66.33±0.98</td><td>67.04±2.92</td></tr><tr><td>Full</td><td>81.48±0.37</td><td>74.07±0.55</td><td>73.34±0.82</td><td>78.07±0.76</td><td>75.2±1.33</td><td>75.59±2.84</td></tr></table>

Table 9: Average performance per language across the different runs under different resource budgets for the Aspect Category Detection dataset. Here, $\pm$ denotes the standard deviation over the conducted runs.
<table><tr><td>Ksrc</td><td>ENG</td><td>AMH</td><td>ARA</td><td>AZE</td><td>FRA</td><td>IND</td><td>ISL</td><td>JAP</td></tr><tr><td colspan="9">ZS-XLT (Ktgt = 0)</td></tr><tr><td>8-shot</td><td>62.93±1.5</td><td>32.53±0.42</td><td>36.81±0.93</td><td>40.3±0.82</td><td>52.11±0.77</td><td>47.24±0.64</td><td>44.9±0.99</td><td>50.96±0.67</td></tr><tr><td>16-shot</td><td>70.52±7.24</td><td>39.27±7.79</td><td>43.38±6.55</td><td>47.55±6.48</td><td>59.71±7.75</td><td>56.42±8.62</td><td>51.42±6.8</td><td>57.27±6.1</td></tr><tr><td>32-shot</td><td>81.72±1.39</td><td>52.4±1.73</td><td>58.98±2.25</td><td>59.58±1.63</td><td>73.72±1.88</td><td>71.08±1.46</td><td>63.67±3.83</td><td>71.65±1.33</td></tr><tr><td>64-shot</td><td>81.71±2.81</td><td>49.68±6.32</td><td>56.19±4.85</td><td>58.49±6.41</td><td>72.78±5.1</td><td>71.34±2.6</td><td>63.23±4.88</td><td>70.85±3.34</td></tr><tr><td>Full</td><td>90.06±0.45</td><td>62.53±0.79</td><td>70.98±1.95</td><td>71.63±1.87</td><td>84.87±0.25</td><td>84.23±0.09</td><td>73.77±0.62</td><td>83.39±0.66</td></tr><tr><td colspan="9">1S-XLT (Ktgt = 1)</td></tr><tr><td>8-shot</td><td>63.3±1.4</td><td>34.26±0.62</td><td>37.67±0.9</td><td>41.96±0.3</td><td>53.41±1.02</td><td>49.6±0.57</td><td>46.48±1.34</td><td>53.53±0.32</td></tr><tr><td>16-shot</td><td>70.52±5.74</td><td>39.33±5.89</td><td>44.65±5.74</td><td>48.4±4.75</td><td>59.49±5.58</td><td>57.42±5.93</td><td>51.4±4.01</td><td>60.13±4.87</td></tr><tr><td>32-shot</td><td>81.7±1.06</td><td>52.87±1.07</td><td>58.56±1.65</td><td>60.66±1.92</td><td>73.61±1.31</td><td>71.58±2.2</td><td>64.26±2.96</td><td>72.56±1.24</td></tr><tr><td>64-shot</td><td>81.17±2.61</td><td>50.49±4.91</td><td>56.37±3.92</td><td>58.99±5.67</td><td>72.6±4.53</td><td>71.11±1.85</td><td>63.67±4.49</td><td>71.46±2.95</td></tr><tr><td>Full</td><td>90.08±0.4</td><td>62.78±0.82</td><td>71.11±1.77</td><td>71.63±1.67</td><td>84.86±0.18</td><td>84.17±0.16</td><td>73.9±0.53</td><td>83.45±0.63</td></tr><tr><td colspan="9">IC-XLT(M=20)SRC(Ktgt = 0)</td></tr><tr><td>8-shot</td><td>73.24±2.71</td><td>51.95±2.84</td><td>56.6±3.29</td><td>58.92±3.18</td><td>65.75±3.04</td><td>64.79±3.74</td><td>59.41±3.12</td><td>66.87±3.61</td></tr><tr><td>16-shot</td><td>82.0±1.37</td><td>57.2±2.99</td><td>62.97±2.75</td><td>65.03±3.15</td><td>74.38±2.1</td><td>71.95±3.03</td><td>65.89±3.72</td><td>73.25±2.3</td></tr><tr><td>32-shot</td><td>85.03±0.52</td><td>62.5±1.5</td><td>68.45±1.87</td><td>69.26±1.61</td><td>78.52±1.14</td><td>77.93±1.89</td><td>70.62±1.53</td><td>77.94±1.11</td></tr><tr><td>64-shot</td><td>87.18±0.66</td><td>63.94±1.44</td><td>70.06±1.4</td><td>71.18±1.89</td><td>81.37±1.14</td><td>80.31±1.44</td><td>71.16±1.45</td><td>80.6±1.19</td></tr><tr><td>Full</td><td>89.46±0.43</td><td>61.27±2.66</td><td>70.77±2.18</td><td>71.49±2.12</td><td>83.13±1.88</td><td>81.6±2.15</td><td>68.83±3.72</td><td>82.83±2.19</td></tr><tr><td colspan="9">IC-XLT(M=20)SRC(Ktgt = 
1)</td></tr><tr><td>8-shot</td><td>73.24±2.71</td><td>58.17±3.39</td><td>61.45±3.41</td><td>66.0±3.14</td><td>67.26±3.72</td><td>70.46±3.35</td><td>67.16±2.72</td><td>69.77±3.16</td></tr><tr><td>16-shot</td><td>82.0±1.37</td><td>63.5±3.67</td><td>67.55±1.93</td><td>72.78±1.86</td><td>75.98±1.83</td><td>77.77±1.79</td><td>74.27±2.1</td><td>76.95±1.29</td></tr><tr><td>32-shot</td><td>85.03±0.52</td><td>67.12±2.39</td><td>72.76±1.33</td><td>75.42±1.79</td><td>80.06±1.06</td><td>81.6±1.21</td><td>77.81±1.06</td><td>80.13±0.86</td></tr><tr><td>64-shot</td><td>87.18±0.66</td><td>70.14±1.6</td><td>74.69±0.98</td><td>78.02±1.65</td><td>83.29±0.79</td><td>83.9±0.84</td><td>79.74±1.08</td><td>82.09±0.7</td></tr><tr><td>Full</td><td>89.46±0.43</td><td>68.09±4.07</td><td>76.43±1.87</td><td>81.37±1.99</td><td>84.92±1.4</td><td>85.46±1.62</td><td>80.32±1.1</td><td>83.77±1.72</td></tr><tr><td colspan="9">IC-XLT(M=10)SRC(Ktgt = 0)</td></tr><tr><td>8-shot</td><td>73.36±0.92</td><td>48.95±1.98</td><td>52.93±1.57</td><td>55.65±1.65</td><td>65.77±1.16</td><td>60.89±2.12</td><td>59.64±0.97</td><td>62.74±1.29</td></tr><tr><td>16-shot</td><td>80.54±0.99</td><td>55.99±3.32</td><td>60.75±3.44</td><td>62.65±2.91</td><td>73.17±2.18</td><td>70.08±2.66</td><td>65.99±2.78</td><td>70.86±2.6</td></tr><tr><td>32-shot</td><td>84.22±0.62</td><td>59.47±1.84</td><td>64.83±1.4</td><td>66.48±1.8</td><td>77.22±0.99</td><td>74.92±1.29</td><td>68.99±1.54</td><td>74.61±1.08</td></tr><tr><td>64-shot</td><td>86.75±0.29</td><td>62.92±1.28</td><td>68.55±1.43</td><td>69.91±1.7</td><td>80.21±0.78</td><td>78.86±1.54</td><td>72.37±1.16</td><td>77.97±1.69</td></tr><tr><td>Full</td><td>89.41±0.4</td><td>63.39±3.54</td><td>71.62±2.13</td><td>70.82±3.95</td><td>82.81±1.8</td><td>81.04±2.38</td><td>69.22±3.4</td><td>82.99±1.53</td></tr><tr><td colspan="9">IC-XLT(M=10)SRC(Ktgt = 1)</td></tr><tr><td>8-shot</td><td>73.36±0.92</td><td>51.7±1.88</td><td>57.18±2.67</td><td>61.04±2.01</td><td>67.12±1.62</td><td>67.44±2.98</td><td>66.48±1.45</td><td>64.99±1.72</td></tr><tr><td>16-shot</td><td>80.54±0.99</td><td>60.89±3.56</td><td>65.16±2.89</td><td>68.59±2.67</td><td>74.81±1.81</td><td>75.34±1.74</td><td>73.26±1.39</td><td>73.03±2.16</td></tr><tr><td>32-shot</td><td>84.22±0.62</td><td>61.26±1.67</td><td>66.91±1.4</td><td>70.99±0.92</td><td>80.0±0.73</td><td>79.44±1.07</td><td>75.56±1.17</td><td>76.27±0.44</td></tr><tr><td>64-shot</td><td>86.75±0.29</td><td>66.93±1.62</td><td>72.2±1.22</td><td>74.91±0.94</td><td>82.99±0.78</td><td>83.11±1.18</td><td>79.7±0.67</td><td>80.51±0.78</td></tr><tr><td>Full</td><td>89.41±0.4</td><td>70.68±2.94</td><td>77.28±0.45</td><td>81.73±1.13</td><td>85.53±1.11</td><td>85.6±0.92</td><td>81.07±0.89</td><td>84.68±0.44</td></tr></table>

Table 10: Average performance for English, Amharic, Arabic, Azeri, French, Indonesian, Icelandic, and Japanese across the different runs under different resource budgets in the MASSIVE Domain Classification task. Here, $\pm$ denotes the standard deviation over the conducted runs.
<table><tr><td>Ksrc</td><td>RUS</td><td>SPA</td><td>SWA</td><td>THA</td><td>TUR</td><td>URD</td></tr><tr><td colspan="7">ZS-XLT (Ktgt = 0)</td></tr><tr><td>8-shot</td><td>48.8±1.05</td><td>51.05±0.8</td><td>36.46±0.58</td><td>46.05±0.15</td><td>48.24±0.99</td><td>37.22±0.65</td></tr><tr><td>16-shot</td><td>57.1±7.63</td><td>58.39±7.04</td><td>41.78±6.38</td><td>53.49±7.93</td><td>53.6±5.47</td><td>45.26±7.81</td></tr><tr><td>32-shot</td><td>72.15±2.0</td><td>72.12±1.64</td><td>52.09±2.37</td><td>69.0±2.74</td><td>66.26±1.79</td><td>58.88±2.03</td></tr><tr><td>64-shot</td><td>72.11±5.17</td><td>71.83±4.43</td><td>50.36±5.54</td><td>67.6±5.01</td><td>63.97±5.46</td><td>56.4±4.64</td></tr><tr><td>Full</td><td>83.98±0.49</td><td>83.09±0.79</td><td>65.75±2.19</td><td>79.88±2.37</td><td>77.7±1.19</td><td>67.72±0.96</td></tr><tr><td colspan="7">1S-XLT (Ktgt = 1)</td></tr><tr><td>8-shot</td><td>50.65±1.06</td><td>52.26±1.06</td><td>37.31±0.61</td><td>49.75±0.38</td><td>49.86±0.89</td><td>39.4±0.32</td></tr><tr><td>16-shot</td><td>58.79±6.61</td><td>58.37±5.28</td><td>42.03±4.64</td><td>55.46±6.02</td><td>54.14±3.8</td><td>46.04±5.9</td></tr><tr><td>32-shot</td><td>72.77±2.13</td><td>72.65±1.03</td><td>52.88±2.07</td><td>69.34±2.89</td><td>66.52±1.94</td><td>60.4±1.36</td></tr><tr><td>64-shot</td><td>71.95±4.93</td><td>71.64±3.48</td><td>51.26±4.19</td><td>66.78±4.84</td><td>63.44±5.25</td><td>58.09±3.61</td></tr><tr><td>Full</td><td>83.89±0.42</td><td>83.15±0.58</td><td>65.85±1.84</td><td>79.95±1.93</td><td>77.68±1.09</td><td>68.34±0.74</td></tr><tr><td colspan="7">IC-XLT M=20 SRC (Ktgt = 0)</td></tr><tr><td>8-shot</td><td>64.98±3.46</td><td>63.79±3.12</td><td>51.51±4.17</td><td>63.69±3.56</td><td>62.52±3.31</td><td>57.68±2.64</td></tr><tr><td>16-shot</td><td>74.26±2.59</td><td>72.2±2.17</td><td>55.0±4.2</td><td>69.61±2.29</td><td>68.52±2.35</td><td>63.29±2.7</td></tr><tr><td>32-shot</td><td>79.78±0.82</td><td>75.58±1.77</td><td>59.21±3.66</td><td>75.42±1.86</td><td>73.58±1.65</td><td>68.19±1.2</td></tr><tr><td>64-shot</td><td>81.58±1.05</td><td>78.44±1.66</td><td>60.89±2.89</td><td>77.9±1.3</td><td>76.14±1.75</td><td>70.03±0.95</td></tr><tr><td>Full</td><td>83.76±1.68</td><td>81.23±2.55</td><td>53.76±5.18</td><td>78.91±1.62</td><td>76.36±2.95</td><td>68.26±2.45</td></tr><tr><td colspan="7">IC-XLT M=20 (Ktgt = 1)</td></tr><tr><td>8-shot</td><td>69.41±3.01</td><td>67.04±3.46</td><td>65.27±3.1</td><td>66.53±2.65</td><td>70.03±3.31</td><td>63.22±2.82</td></tr><tr><td>16-shot</td><td>77.11±1.51</td><td>75.4±1.5</td><td>71.36±2.28</td><td>72.55±0.81</td><td>76.18±1.43</td><td>69.46±1.98</td></tr><tr><td>32-shot</td><td>81.43±1.16</td><td>78.68±1.2</td><td>74.61±1.26</td><td>76.1±2.19</td><td>78.46±1.28</td><td>73.13±1.19</td></tr><tr><td>64-shot</td><td>83.24±0.8</td><td>81.06±1.33</td><td>76.78±1.47</td><td>79.36±0.97</td><td>80.75±1.52</td><td>75.13±0.97</td></tr><tr><td>Full</td><td>84.39±2.2</td><td>83.65±1.84</td><td>75.54±5.14</td><td>81.02±1.67</td><td>82.25±2.51</td><td>76.12±1.38</td></tr><tr><td colspan="7">IC-XLT M=10 SRC (Ktgt = 
0)</td></tr><tr><td>8-shot</td><td>62.66±1.71</td><td>61.93±1.47</td><td>48.24±3.18</td><td>60.16±1.59</td><td>59.48±1.44</td><td>54.15±1.66</td></tr><tr><td>16-shot</td><td>70.82±3.03</td><td>69.05±2.3</td><td>54.17±4.41</td><td>68.4±2.33</td><td>65.58±2.76</td><td>61.64±2.6</td></tr><tr><td>32-shot</td><td>76.48±0.93</td><td>74.15±1.1</td><td>54.89±3.2</td><td>72.85±1.34</td><td>70.25±1.89</td><td>65.48±1.62</td></tr><tr><td>64-shot</td><td>81.02±0.89</td><td>78.2±1.22</td><td>58.61±3.63</td><td>76.28±0.93</td><td>73.64±0.57</td><td>67.74±1.23</td></tr><tr><td>Full</td><td>84.41±0.93</td><td>81.53±2.17</td><td>56.06±3.33</td><td>79.02±1.86</td><td>76.65±2.32</td><td>67.47±3.35</td></tr><tr><td colspan="7">IC-XLT M=10 (Ktgt = 1)</td></tr><tr><td>8-shot</td><td>65.57±2.6</td><td>65.0±1.37</td><td>61.48±1.71</td><td>61.81±2.62</td><td>63.79±2.42</td><td>58.32±2.48</td></tr><tr><td>16-shot</td><td>73.95±2.8</td><td>71.74±2.7</td><td>69.91±2.1</td><td>70.48±2.28</td><td>70.72±2.4</td><td>66.05±2.08</td></tr><tr><td>32-shot</td><td>76.83±0.94</td><td>76.54±0.66</td><td>70.82±1.86</td><td>74.33±1.03</td><td>74.68±0.97</td><td>68.89±1.27</td></tr><tr><td>64-shot</td><td>82.49±0.89</td><td>80.75±1.2</td><td>75.7±2.2</td><td>78.26±0.56</td><td>78.02±0.9</td><td>72.57±1.3</td></tr><tr><td>Full</td><td>86.18±0.39</td><td>84.18±1.54</td><td>78.23±1.87</td><td>80.98±0.48</td><td>83.63±1.1</td><td>76.34±0.61</td></tr></table>

Table 11: Average performance for Russian, Spanish, Swahili, Thai, Turkish, and Urdu across the different runs under different resource budgets in the MASSIVE Domain Classification task. Here, $\pm$ denotes the standard deviation over the conducted runs.
2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95f6e7982dd43b1950d9706014405fc51619236434a9e0877951b14feaf526cc
size 1824232
2024/Adaptive Cross-lingual Text Classification through In-Context One-Shot Demonstrations/layout.json
ADDED
The diff for this file is too large to render.
See raw diff
2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/d9c0afc9-16d2-4fb1-992a-9475f26df5e4_content_list.json
ADDED
The diff for this file is too large to render.
See raw diff
2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/d9c0afc9-16d2-4fb1-992a-9475f26df5e4_model.json
ADDED
The diff for this file is too large to render.
See raw diff
2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/d9c0afc9-16d2-4fb1-992a-9475f26df5e4_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:699c6fd8aae0f4efe60fe0efa1238b6cf806857d67cc548eff57dfb300d3c224
size 4276265
2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/full.md
ADDED
@@ -0,0 +1,483 @@
# Adaptive Rank Selections for Low-Rank Approximation of Language Models

Shangqian Gao, Ting Hua, Yen-Chang Hsu, Yilin Shen, Hongxia Jin

Samsung Research America

{s.gao1,ting.hua,yenchang.hsu,yilin.shen,hongxia.jin}@samsung.com

# Abstract

Singular Value Decomposition (SVD) and its weighted variants have made significant progress in compressing language models. Previous works assume the same importance for all operations and assign the same number of ranks to the different layers of a language model. However, such uniform rank selection is sub-optimal, since different operations (layers) have non-uniform demands in capacity. In other words, a desired SVD strategy should allocate more ranks to important operations and fewer to less important ones. However, a globally optimized selection of ranks for neural networks remains an open problem, and it is a non-trivial challenge because the selection is discrete. In this work, we propose a novel binary masking mechanism for optimizing the number of ranks in a differentiable framework. Our strategy uses a novel regularization to make the masking comply with the SVD property that ranks come with sorted singular values. Our experiments cover both types of language models, encoder-only and decoder-only, including large language models such as LLaMA. The compressed models achieve much better accuracy than previous SVD approaches and their SOTA variants. More interestingly, our method retains significantly better accuracy with zero or limited fine-tuning, proving the substantial advantage of adaptive rank selection.

# 1 Introduction

Transformer-based models (Vaswani et al., 2017) have been very popular across different Natural Language Processing tasks, such as text classification (Wang et al., 2019a), question answering (Rajpurkar et al., 2016), and summarization (Liu, 2019). Despite their success on these tasks, the size of these models often scales up to millions or billions of parameters, especially for recently proposed large language models (Touvron et al., 2023; Biderman et al., 2023). Such a huge number of parameters makes these models very hard to deploy on resource-limited devices, such as mobile phones or edge devices. As a result, the compression of Transformer-based language models has drawn much attention.

Transformer-based models have two core operations: self-attention layers and feed-forward layers. These operations are built on linear layers, making them natural targets for compression techniques like low-rank weight factorization (Golub and Reinsch, 1971; Noach and Goldberg, 2020) with SVD or its variants. Low-rank weight factorization decomposes a large linear layer into two small linear layers without changing other parts of the model, a deployment-friendly property. In addition, it is orthogonal to other compression techniques, such as structural pruning (Sanh et al., 2020), quantization (Shen et al., 2020), and knowledge distillation (Sun et al., 2019; Jiao et al., 2019).

Previous work (Hsu et al., 2021) shows that using vanilla SVD for compression can result in a significant performance drop; they argue that low reconstruction error is not equivalent to high accuracy. As a result, Hsu et al. (2021) proposed applying the Fisher Information (Pascanu and Bengio, 2014) matrix to re-weight the weight matrix, so that the factorization captures information from both the task and the reconstruction error. Empirically, Fisher-Information-weighted SVD performs much better than the original SVD. Besides the Fisher Information matrix, other importance scores, such as the first-order Taylor expansion (Molchanov et al., 2019; Hua et al., 2022), can also be used to re-weight the weight matrix.

Although the weighted SVD methods mentioned above achieve promising results, they treat all layers uniformly and use the same number of ranks for all weight matrices. On the other hand, some prior works suggest that the compression rate should differ across layers, both for vision (Molchanov et al., 2019) and language (Lagunas et al., 2021) models. These observations provide clues for improving existing weight factorization by selecting the proper number of ranks for each layer. Inspired by them, our goal is to find the optimal number of ranks for all the layers in a neural network. However, this optimization is not trivial, since it is a discrete, non-smooth, and non-convex problem. Reinforcement learning (Schulman et al., 2017) and evolutionary algorithms (Real et al., 2019) may find a solution, but they introduce substantial optimization costs that are not affordable for larger models.

To address the above challenge, we propose to use regularized differentiable binary masks to learn the number of ranks for each operation. The entire learning pipeline is built upon an end-to-end differentiable framework. We use the sum of a binary mask to capture the number of ranks for each layer, and the mask is properly regularized to align with the sorted singular values of SVD. Moreover, we use a hypernetwork to improve the effectiveness of our method, which further accelerates the learning process. With all these designs, our method can efficiently find the number of ranks for different operations. The contributions of our work can be summarized as follows:

- We propose to use the sum of regularized binary masks to capture the number of ranks for different operations. To further improve efficiency, we introduce a hypernetwork to generate the number of ranks.
- We propose a novel regularization that makes the binary masks comply with the SVD property of sorted singular values. The regularized binary mask retains the important factors inherited from SVD or its weighted variants.
- Extensive experiments show that our method significantly improves the performance of SVD and its SOTA variants on both encoder-only and decoder-only language models.

# 2 Related Works

The benefit of low-rank factorization is that it can be applied to any linear layer. An early work (Winata et al., 2019) applies SVD to the LSTM cell and explores its effectiveness on different NLP tasks (Zhang et al., 2021, 2022, 2023) and model components. Noach and Goldberg (2020) propose a two-stage approach to compress a pre-trained language model: the first stage decomposes the weight matrices of the pre-trained model with SVD, and the second fine-tunes the weights with knowledge distillation to regain performance. Since standard SVD cannot capture all the information from the task, the Fisher Information is introduced to re-weight the weight matrix before applying SVD (Hsu et al., 2021). On top of (Hsu et al., 2021), several numeric optimization methods are used to find the optimal solution to the weighted SVD problem when the weighting matrix is not diagonal (Hua et al., 2022).

Besides model weights, SVD can also be applied to embedding layers. The ALBERT model (Lan et al., 2019) addresses the issue of redundant parameters in the embedding layer, which tends to have high input and output dimensions, by employing factorization. Reid et al. (2021) introduce a novel approach called Self-Attentive Factorized Embeddings (SAFE), which enhances performance by incorporating a small self-attention layer built upon a linear projection.

A crucial point omitted by previous works is that not all operations are created equal: some operations require more capacity than others. Our method tackles this problem by automatically learning the number of ranks for each operation.

Our method is also related to network pruning, especially structural pruning. Block Pruning (Lagunas et al., 2021) integrates structures of any size into the movement pruning paradigm for fine-tuning and prunes the model globally. Beyond NLP tasks, deciding the width of a convolution layer has also been studied extensively using reinforcement learning (He et al., 2018), evolutionary algorithms (Liu et al., 2019), etc. Differentiable pruning (Guo et al., 2020; Herrmann et al., 2020; Wang et al., 2019b; Gao et al., 2022, 2023a,b) is also a popular direction since its cost is often low. However, these methods cannot be directly applied to selecting the number of ranks, due to their cost or the difficulty of fine-tuning that results from using binary masks.

# 3 Method

# 3.1 Background

Transformers have many linear layers, which makes them very suitable for compression methods like Singular Value Decomposition (SVD). Given a matrix $\mathbf{W} \in \Re^{M \times N}$, SVD decomposes it into three matrices:

$$
\mathbf{W} = \mathbf{U}\mathbf{S}\mathbf{V}^{T} \approx \mathbf{U}_{r}\mathbf{S}_{r}\mathbf{V}_{r}^{T}, \tag{1}
$$

where the orthogonal matrix $\mathbf{U} \in \Re^{M \times M}$ contains the left singular vectors and the orthogonal matrix $\mathbf{V} \in \Re^{N \times N}$ contains the right singular vectors. $\mathbf{S}$ is a diagonal matrix of non-zero singular values, $\operatorname{Diag}(s) = \operatorname{Diag}(\sigma_1, \sigma_2, \dots, \sigma_N)$ (assuming $M \geq N$), where $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_N$. $\mathbf{U}_r, \mathbf{S}_r, \mathbf{V}_r$ denote the truncated matrices with rank $r$ that approximate the original matrix.

With the SVD, the computation of a linear layer in a neural network can be rewritten as follows, with input data $\mathbf{X} \in \Re^{B \times M}$, weight matrix $\mathbf{W} \in \Re^{M \times N}$, and bias $\mathbf{b} \in \Re^{1 \times N}$:

$$
\mathbf{Y} = \mathbf{X}\mathbf{W} + \mathbf{b} = \mathbf{X}(\mathbf{U}\mathbf{S})\mathbf{V}^{T} + \mathbf{b}. \tag{2}
$$

The standard SVD can be further improved by multiplying $\mathbf{W}$ by a weighting matrix, which can be computed in many different ways, such as using Fisher Information (Pascanu and Bengio, 2014) or Importance Estimation (Molchanov et al., 2019). Weighted SVD often performs better than vanilla SVD when compressing language models. Denote the weighting matrix by $\mathbf{I}_w$; it is a diagonal matrix in which the importance of each weight is summed within each column or row. After applying $\mathbf{I}_w$, we have:

$$
\mathbf{Y} = \mathbf{X}\mathbf{W} + \mathbf{b} = \mathbf{X}\left[\mathbf{I}_{w}^{-1}\left(\mathbf{U}'\mathbf{S}'\right)\mathbf{V}'^{T}\right] + \mathbf{b}, \tag{3}
$$

where $\mathbf{U}', \mathbf{S}'$, and $\mathbf{V}'$ come from the weighted SVD decomposition $\mathbf{I}_w\mathbf{W} = \mathbf{U}'\mathbf{S}'\mathbf{V}'^{T}$. Note that by using SVD or its weighted variants, we can easily compress pre-trained models, which is vital since the training costs of typical large language models are very high, and training them from scratch is usually prohibitively expensive.

# 3.2 Overview

In the following, we first introduce how we parameterize the number of ranks. We then introduce the hypernetwork used to generate the number of ranks. After that, we discuss how we overcome the fine-tuning difficulty caused by directly using indices from binary masks and how to produce top-k-like masks. The overall optimization problem is introduced last. Fig. 1 illustrates our method for one self-attention layer.

Figure 1: An overview of our method. In the figure, we use the self-attention layer as an example. The hypernetwork produces the number of ranks for each operation, which is then applied to the query, key, and value weights. Since $m$ is differentiable w.r.t. the hypernetwork parameters, we can optimize the number of ranks in an end-to-end differentiable way.

# 3.3 Control the Number of Ranks

In Equation 2, the diagonal matrix $\mathbf{S}$ contains the singular values from SVD. If a singular value equals zero, the corresponding vectors from $\mathbf{U}$ and $\mathbf{V}$ can be safely removed. Usually, the singular values of model weights are non-zero. As a result, we can apply a binary mask $m \in \{0,1\}$ on top of the diagonal matrix $\mathbf{S}$:

$$
\hat{s} = m \odot s, \tag{4}
$$

where $s$ is the vector of singular values and $\mathbf{S} = \operatorname{Diag}(s)$. Applying $m$ changes Eq. 2 into:

$$
\mathbf{Y} = \mathbf{X}\left(\mathbf{U}\operatorname{Diag}(\hat{s})\right)\mathbf{V}^{T} + \mathbf{b}, \tag{5}
$$

which inserts the mask $m$ into the forward/backward computation of a linear layer under SVD decomposition. By doing so, we can calculate gradients w.r.t. $m$ during regular backpropagation. As a result, the mask can be learned in a loss-aware fashion if it is parameterized properly. Note that, unlike the uniform rank selection in previous works (Noach and Goldberg, 2020; Hsu et al., 2021; Hua et al., 2022), our method enables adaptive rank selection for the individual operations of the model. This creates the flexibility to allocate different ranks to different operations, assigning more parameters to more important ones, so the overall performance can be largely improved over the uniform rank selection setting.

# 3.4 Hypernetwork

The binary mask $m$ is not differentiable in its plain form; therefore, we incorporate the straight-through Gumbel-Sigmoid (Jang et al., 2016) operation to make it differentiable. In addition, instead of using an element-wise mask parameterization, we employ a hypernetwork (HN) to accelerate the learning of the masks $m$. Specifically, $m$ is generated by:

$$
m = \mathbf{HN}(z; \theta), \tag{6}
$$

where $\theta$ denotes the parameters of the hypernetwork, and $z$ (randomly sampled before training the hypernetwork) is its input. The HN is composed of GRUs (Chung et al., 2014) and linear layers. The intuition is that the GRU learns interactions between different operations, while the linear layers map GRU outputs to individual operations of different sizes. More details of the hypernetwork are presented in the Appendix.

# 3.5 Singular Value-aware Masking

The hypernetwork gives the number of ranks and the exact positions of the selected ranks for each layer. On the other hand, SVD, or its weighted version, provides sorted singular values in the diagonal matrix $\mathbf{S}$ (from Eq. 2). So far, the hypernetwork computes the mask completely independently of the structure of $\mathbf{S}$. This independence can produce a mask that skips ranks with high singular values, resulting in a less generalizable selection of ranks. This behavior significantly deteriorates the compressed model and impedes the subsequent fine-tuning process from recovering accuracy. In a later section, Fig. 3 (the plot labeled 'Element-Wise') shows this phenomenon when using the exact rank positions selected by the hypernetwork.

To address this issue, we use the sum of the binary mask, $\mathbf{1}^T m_l$ ($m_l$ is the mask for the $l$-th layer), to represent the number of ranks for the current operation and use this sum to force selecting the top-k ranks. Although this strategy resolves the above issue, it introduces a gap between the learned masks and the actual masks used to compress the model. The gap can be formulated as:

$$
\left\| m_{l} \odot s - m_{l}' \odot s \right\|_{2}^{2}, \tag{7}
$$

where $m_l'$ is a binary mask whose first $\mathbf{1}^T m_l$ elements equal 1 (i.e., $m'_{l[:\mathbf{1}^T m_l]} = 1$) and whose remaining elements equal 0. The smaller the gap, the more closely the binary mask $m_l$ follows the structure of the sorted singular values from SVD. This insight inspires our novel regularization term: $\mathcal{R}_{\text{align}}(m_l) = \| m_l \odot s - m_l' \odot s\|_2^2$. This regularization can be seamlessly inserted into the optimization of the HN without introducing extra parameters. Our ablation study verifies this insight and demonstrates the effectiveness of $\mathcal{R}_{\text{align}}$.

# Algorithm 1: Adaptive Rank Selection

Input: a sub-dataset for training the HN: $D_{\mathrm{HN}}$; retained rate of parameters: $p$; hyper-parameters: $\lambda, \gamma$; HN training iterations: $N_{\mathrm{iter}}$; a pre-trained model: $f$; the hypernetwork HN parameterized by $\theta$.

For $i := 1$ to $N_{\mathrm{iter}}$, for each mini-batch $(x, y)$ in $D_{\mathrm{HN}}$:

1. generate $m$ from the HN with Eq. 6.
2. calculate the parameter regularization term $\mathcal{R}(T(m), pT_{\mathrm{total}})$.
3. calculate the alignment regularization term $\mathcal{R}_{\mathrm{align}}$.
4. calculate gradients w.r.t. $\theta$ by minimizing Obj. 8 and update $\theta$.

Compress the model based on the learned number of ranks: $\mathbf{US} = (\mathbf{US})_{[:, :\mathbf{1}^{T}m]}$, $\mathbf{V} = \mathbf{V}_{[:, :\mathbf{1}^{T}m]}$. Return the resulting model for fine-tuning.

# 3.6 The Proposed Algorithm

For a specific task, to maximally preserve performance under a parameter budget, we minimize the task loss together with a regularization on the number of parameters and a regularization aligning with the SVD property. The overall objective function is defined as:

$$
\min_{\theta} \mathcal{L}(f(x; m), y) + \lambda \mathcal{R}(T(m), pT_{\mathrm{total}}) + \gamma \frac{1}{L} \sum_{l=1}^{L} \mathcal{R}_{\mathrm{align}}(m_{l}), \tag{8}
$$

where $x, y$ are an input and its label, $\mathcal{L}$ is the task-specific loss, $f(\cdot; m)$ is the model parameterized by the mask $m$, $\lambda$ controls the weight of the parameter regularization $\mathcal{R}$ with $\mathcal{R}(a, b) = \log(\max(a, b)/b)$, $\gamma$ controls the weight of $\mathcal{R}_{\mathrm{align}}$, $T_{\mathrm{total}}$ is the total number of parameters, and $p$ is the preserved ratio of parameters specified by the user. $T(m)$ is the number of parameters determined by the number of ranks for each operation. Taking the $l$-th weight matrix as an example, it is given by $T(m_{l}) = (M_{l} + N_{l}) \times (\mathbf{1}^{T}m_{l})$ and $T(m) = \sum_{l=1}^{L} T(m_{l})$, where $M_{l}$ and $N_{l}$ are the input and output dimensions of the $l$-th weight matrix. Note that the model weights are frozen during the optimization of Obj. 8; therefore, the number of learnable parameters is small and can be optimized efficiently.
<table><tr><td>Task</td><td>MRPC</td><td>STSB</td><td>COLA</td><td>SST-2</td><td>MNLI</td><td>QNLI</td><td>QQP</td><td>Avg</td><td>Δ-Avg</td><td># Params</td></tr><tr><td>BERT-base</td><td>87.29</td><td>88.47</td><td>57.78</td><td>92.90</td><td>84.95</td><td>91.25</td><td>87.92</td><td>84.36</td><td>-</td><td>109.5M</td></tr><tr><td>SVD</td><td>55.88</td><td>23.99</td><td>2.15</td><td>78.10</td><td>35.73</td><td>37.78</td><td>59.70</td><td>41.90</td><td>-42.36</td><td>66.5M</td></tr><tr><td>+ fine-tuning</td><td>83.60</td><td>85.67</td><td>29.02</td><td>91.28</td><td>83.02</td><td>89.35</td><td>87.05</td><td>78.42</td><td>-5.94</td><td>66.5M</td></tr><tr><td>SVD+ARS (ours)</td><td>81.22</td><td>73.78</td><td>0.00</td><td>81.08</td><td>62.75</td><td>57.86</td><td>66.71</td><td>60.48</td><td>-23.88</td><td>65.1M</td></tr><tr><td>+ fine-tuning (ours)</td><td>85.57</td><td>86.30</td><td>47.08</td><td>91.97</td><td>83.55</td><td>89.44</td><td>87.39</td><td>81.61</td><td>-2.75</td><td>65.1M</td></tr><tr><td>IWSVD</td><td>5.52</td><td>58.97</td><td>13.14</td><td>81.31</td><td>46.96</td><td>52.50</td><td>63.30</td><td>45.96</td><td>-38.40</td><td>66.5M</td></tr><tr><td>+ fine-tuning</td><td>86.87</td><td>87.45</td><td>43.83</td><td>89.91</td><td>82.56</td><td>89.35</td><td>86.55</td><td>80.93</td><td>-3.43</td><td>66.5M</td></tr><tr><td>IWSVD+ARS (ours)</td><td>81.58</td><td>76.93</td><td>23.97</td><td>83.94</td><td>51.88</td><td>77.58</td><td>75.05</td><td>67.28</td><td>-17.08</td><td>65.1M</td></tr><tr><td>+ fine-tuning (ours)</td><td>88.13</td><td>88.23</td><td>52.88</td><td>91.40</td><td>83.86</td><td>89.91</td><td>87.59</td><td>83.14</td><td>-1.22</td><td>65.1M</td></tr><tr><td>FWSVD</td><td>68.00</td><td>68.77</td><td>15.69</td><td>79.93</td><td>48.10</td><td>52.65</td><td>66.07</td><td>57.03</td><td>-27.33</td><td>66.5M</td></tr><tr><td>+ fine-tuning</td><td>88.36</td><td>86.90</td><td>45.80</td><td>89.60</td><td>82.54</td><td>89.18</td><td>86.97</td><td>81.34</td><td>-3.02</td><td>66.5M</td></tr><tr><td>FWSVD+ARS (ours)</td><td>81.22</td><td>84.24</td><td>27.22</td><td>83.37</td><td>71.12</td><td>64.10</td><td>75.18</td><td>69.49</td><td>-14.87</td><td>65.1M</td></tr><tr><td>+ fine-tuning (ours)</td><td>89.40</td><td>88.47</td><td>55.01</td><td>91.06</td><td>83.68</td><td>89.68</td><td>87.41</td><td>83.53</td><td>-0.83</td><td>65.1M</td></tr></table>

Table 1: Results of the GLUE benchmark when $p = 0.48$. 'Avg' is the average score over the GLUE tasks, and 'Δ-Avg' is the difference in 'Avg' between the full model and each compressed model. Given a similar number of parameters, a smaller 'Δ-Avg' represents better performance.

We present the algorithm for learning the number of ranks in Alg. 1. It requires only a small subset of the original training data; therefore, the computational overhead of optimizing the number of ranks is negligible (details are in the experiment section). Note that all $m_{l}$ are learned jointly in one pass, and rank selection competes across all layers; in other words, important operations can receive more ranks than less important ones. Finally, we select the top $\mathbf{1}^{T}m_{l}$ ranks for each operation to compress the model, as described in Section 3.5 and sketched below.

# 4 Experiments

# 4.1 Settings

We assess our proposed method and the baselines using the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019a) and the large language model pre-training task on Pile (Gao et al., 2020). For GLUE tasks, we use BERT (Devlin et al., 2018), MobileBERT (Sun et al., 2020), and DistillBERT (Sanh et al., 2019) to evaluate our method. We use LLaMA-7B (Touvron et al., 2023) to evaluate our method on large language models. In the Appendix, we use models from the Pythia suite (Biderman et al., 2023) to evaluate our method on the language modeling task. Throughout the experiment section, our method is abbreviated as ARS (Adaptive Rank Selection).

To build fair comparison baselines, we compress all linear layers in the model, including self-attention layers and feed-forward networks. We do not compress the embedding layer; the compression rate of our method can be further improved by incorporating previous works that focus on compressing the embedding layer (Lan et al., 2019; Reid et al., 2021).

Our method aims to find the best number of ranks for each operation. Accordingly, we show that it is effective across different choices of weighting matrices (Eq. 3) as well as with no weighting matrix (Eq. 2). For weighted SVD, we choose two kinds of weighting matrices: Fisher Information Weighted SVD (FWSVD) (Hsu et al., 2021) and Importance Weighted SVD (IWSVD). For IWSVD, the importance is calculated by directly following the definition from (Molchanov et al., 2019), which is based on the first-order Taylor expansion.

For all tasks, we start from pre-trained language models, which are first fine-tuned on downstream tasks such as GLUE or language modeling. After that, we freeze the model weights and train the HN based on Obj. 8. The model is then compressed based on the number of ranks produced by the HN. Finally, the compressed model is fine-tuned again on the downstream or pre-training tasks.

When training the HN, we choose 4000 samples for GLUE tasks (Wang et al., 2019a); if a dataset is smaller than 4000 samples, we use the whole dataset to train the HN. For the language modeling task, we train the HN for 2000 iterations. ADAM (Kingma and Ba, 2015) is used to train the HN with a constant learning rate of $1 \times 10^{-3}$. $\lambda$ and $\gamma$ in Obj. 8 are set to 16 and 10, respectively, for all experiments. For all GLUE tasks, the other settings are the default configuration from the HuggingFace Transformers library. We defer other training details of the language modeling task to the Appendix. All of our implementations are based on the HuggingFace Transformers library (Wolf et al., 2020) and PyTorch (Paszke et al., 2019).
<table><tr><td>Task</td><td>MRPC</td><td>STSB</td><td>COLA</td><td>SST-2</td><td>MNLI</td><td>QNLI</td><td>QQP</td><td>Avg</td><td>Δ-Avg</td><td># Params</td></tr><tr><td>BERT-base</td><td>87.29</td><td>88.47</td><td>57.78</td><td>92.90</td><td>84.95</td><td>91.25</td><td>87.92</td><td>84.36</td><td>-</td><td>109.5M</td></tr><tr><td>SVD</td><td>0.00</td><td>17.68</td><td>2.05</td><td>63.88</td><td>36.60</td><td>49.46</td><td>46.56</td><td>30.89</td><td>-53.47</td><td>52.4M</td></tr><tr><td>+ fine-tuning</td><td>81.06</td><td>79.35</td><td>9.83</td><td>89.11</td><td>81.61</td><td>86.99</td><td>86.35</td><td>73.47</td><td>-10.89</td><td>52.4M</td></tr><tr><td>SVD+ARS (ours)</td><td>81.22</td><td>64.23</td><td>0.00</td><td>79.47</td><td>35.73</td><td>52.41</td><td>51.50</td><td>52.08</td><td>-31.38</td><td>52.6M</td></tr><tr><td>+ fine-tuning (ours)</td><td>81.42</td><td>82.85</td><td>27.62</td><td>89.22</td><td>83.07</td><td>87.50</td><td>86.68</td><td>76.91</td><td>-7.45</td><td>52.6M</td></tr><tr><td>IWSVD</td><td>1.42</td><td>23.54</td><td>0.00</td><td>72.48</td><td>41.59</td><td>49.51</td><td>57.54</td><td>35.15</td><td>-49.21</td><td>52.4M</td></tr><tr><td>+ fine-tuning</td><td>80.79</td><td>82.29</td><td>24.49</td><td>88.76</td><td>81.63</td><td>87.46</td><td>86.35</td><td>75.97</td><td>-8.39</td><td>52.4M</td></tr><tr><td>IWSVD+ARS (ours)</td><td>81.22</td><td>68.94</td><td>0.00</td><td>82.57</td><td>60.70</td><td>67.75</td><td>64.30</td><td>60.78</td><td>-23.58</td><td>52.6M</td></tr><tr><td>+ fine-tuning (ours)</td><td>84.87</td><td>86.09</td><td>45.25</td><td>90.02</td><td>82.97</td><td>88.78</td><td>87.13</td><td>80.73</td><td>-3.63</td><td>52.6M</td></tr><tr><td>FWSVD</td><td>0.00</td><td>36.95</td><td>15.69</td><td>72.02</td><td>40.62</td><td>49.46</td><td>52.81</td><td>36.59</td><td>-47.77</td><td>52.4M</td></tr><tr><td>+ fine-tuning</td><td>81.96</td><td>83.41</td><td>45.80</td><td>88.42</td><td>80.67</td><td>87.66</td><td>86.76</td><td>78.34</td><td>-6.02</td><td>52.4M</td></tr><tr><td>FWSVD+ARS (ours)</td><td>81.22</td><td>67.25</td><td>23.55</td><td>81.42</td><td>58.94</td><td>70.49</td><td>63.56</td><td>63.77</td><td>-20.59</td><td>52.6M</td></tr><tr><td>+ fine-tuning (ours)</td><td>85.48</td><td>86.19</td><td>48.79</td><td>90.94</td><td>82.84</td><td>88.45</td><td>87.04</td><td>81.39</td><td>-2.97</td><td>52.6M</td></tr></table>

Table 2: Results of the GLUE benchmark when $p = 0.33$. The definitions of 'Avg' and 'Δ-Avg' are the same as in Tab. 1.

(a) MRPC

(b) STSB
Figure 2: The number of parameters vs. the performance after fine-tuning for FWSVD and FWSVD+ARS.

(c) COLA

# 4.2 GLUE Results for BERT

The GLUE results are shown in Tab. 1. As introduced previously, our method ARS is applied on top of three baselines: SVD, IWSVD, and FWSVD. For all methods, the uniform baseline from previous works has $66.5\mathrm{M}$ parameters, achieved by removing $67\%$ of the ranks from the original model. For ARS, the model has $65.1\mathrm{M}$ parameters, achieved by setting $p = 0.48$ in Obj. 8.

We present results before and after fine-tuning in the table. It is clear that ARS boosts the performance of uniform SVD, IWSVD, and FWSVD. In particular, before fine-tuning, SVD+ARS outperforms SVD by 18.48 points in average task performance ('Avg' in the table); after fine-tuning, the gap between SVD and SVD+ARS is 3.19. By using Fisher Information or other importance scores, the compressed model performs much better across different tasks, since task-related information is injected. With these stronger baselines, our method still consistently improves performance. For IWSVD, our method is 21.32/2.21 better than the baseline (before/after fine-tuning) in average task performance. For FWSVD, our method is again better than the baseline by 12.46 and 2.19 before and after fine-tuning. In summary, ARS provides substantial improvements even over stronger baselines.

Beyond comparisons under the same weighting mechanism, SVD+ARS reaches similar or even better performance than weighted SVD methods like IWSVD and FWSVD. In particular, by finding the proper number of ranks for each operation, SVD+ARS attains 60.48/81.61 average task performance, while IWSVD attains 45.96/80.93 and FWSVD 57.03/81.34. SVD+ARS is better than IWSVD and performs similarly to FWSVD. From this perspective, properly choosing the number of ranks is as important as building a good importance metric for weighted SVD.

We further increase the compression rate; results are shown in Tab. 2. In this setting, we remove $78\%$ of the ranks for the baseline model and set $p = 0.33$ for the proposed ARS. ARS improves the performance of SVD, IWSVD, and FWSVD across different GLUE tasks. More specifically, SVD+ARS is better than SVD by 20.09/3.44 before and after fine-tuning, IWSVD+ARS is 25.63/4.76 better than IWSVD, and FWSVD+ARS is 27.18/3.05 better than FWSVD. In general, with a more aggressive compression rate, the advantage of ARS is more obvious. In Fig. 2, we visualize the number of parameters vs. performance on MRPC, STSB, and COLA for FWSVD and FWSVD+ARS. FWSVD+ARS outperforms FWSVD across nearly all settings, which again demonstrates that selecting the proper number of ranks matters across different compression rates.
<table><tr><td>Task</td><td>MRPC</td><td>STSB</td><td>COLA</td><td>SST-2</td><td>MNLI</td><td>QNLI</td><td>QQP</td><td>Avg</td><td>Δ-Avg</td><td># Params</td></tr><tr><td>DistillBERT</td><td>88.73</td><td>86.13</td><td>49.75</td><td>90.37</td><td>82.07</td><td>89.2</td><td>86.74</td><td>81.86</td><td>-</td><td>66.9M</td></tr><tr><td>FWSVD</td><td>44.50</td><td>36.23</td><td>15.06</td><td>81.65</td><td>41.58</td><td>72.12</td><td>71.03</td><td>51.74</td><td>-30.12</td><td>45.5M</td></tr><tr><td>+ fine-tuning</td><td>88.12</td><td>84.37</td><td>32.44</td><td>88.07</td><td>79.71</td><td>87.35</td><td>85.65</td><td>77.96</td><td>-3.90</td><td>45.5M</td></tr><tr><td>FWSVD+ARS (ours)</td><td>81.22</td><td>79.10</td><td>21.85</td><td>86.01</td><td>68.64</td><td>79.77</td><td>77.10</td><td>70.53</td><td>-11.33</td><td>44.9M</td></tr><tr><td>+ fine-tuning (ours)</td><td>88.04</td><td>86.43</td><td>43.84</td><td>90.02</td><td>81.49</td><td>87.94</td><td>86.62</td><td>80.63</td><td>-1.23</td><td>44.9M</td></tr><tr><td>MobileBERT</td><td>89.69</td><td>87.24</td><td>51.16</td><td>90.94</td><td>83.41</td><td>90.54</td><td>86.70</td><td>82.81</td><td>-</td><td>24.6M</td></tr><tr><td>FWSVD</td><td>50.99</td><td>57.16</td><td>2.59</td><td>54.59</td><td>46.10</td><td>49.46</td><td>63.58</td><td>46.35</td><td>-36.46</td><td>19.5M</td></tr><tr><td>+ fine-tuning</td><td>87.50</td><td>86.37</td><td>34.42</td><td>88.07</td><td>81.16</td><td>86.67</td><td>86.23</td><td>78.63</td><td>-4.18</td><td>19.5M</td></tr><tr><td>FWSVD+ARS (ours)</td><td>81.22</td><td>81.71</td><td>3.60</td><td>76.83</td><td>73.65</td><td>64.62</td><td>75.74</td><td>65.34</td><td>-17.47</td><td>19.5M</td></tr><tr><td>+ fine-tuning (ours)</td><td>89.60</td><td>87.03</td><td>39.99</td><td>88.19</td><td>83.43</td><td>86.95</td><td>87.23</td><td>80.35</td><td>-2.46</td><td>19.5M</td></tr></table>

(a) MRPC

(b) STSB
Figure 3: The fine-tuning loss averaged over three different random seeds, given $p = 0.48$ with BERT.

(c) COLA

At both compression rates (Tab. 1 and Tab. 2), ARS is much more effective than SVD at retaining performance before fine-tuning, suggesting that adaptive selection of the number of ranks has potential for fine-tuning-light or fine-tuning-free compression.

# 4.3 GLUE Results for Compact Models

ARS already shows promising results when compressing BERT; a follow-up question is whether it can also improve results on compact models. To verify this, we apply FWSVD and FWSVD+ARS to DistillBERT (Sanh et al., 2019) and MobileBERT (Sun et al., 2020). We choose FWSVD and FWSVD+ARS since they achieve the best $\Delta$-Avg on BERT. The overall results are shown in Tab. 3.

For DistillBERT, we again uniformly remove $67\%$ of the ranks for FWSVD, and we set $p = 0.48$ for FWSVD+ARS. Clearly, FWSVD+ARS performs better than FWSVD on DistillBERT, with a gap of 18.79/2.67 in average task performance before and after fine-tuning. For MobileBERT, we uniformly remove $40\%$ of the ranks for FWSVD and set $p = 0.75$ for FWSVD+ARS. FWSVD+ARS outperforms FWSVD by 18.99/1.71 without and with fine-tuning. In short, ARS consistently improves low-rank factorization for compact models like MobileBERT and DistillBERT.

Table 3: Results of the GLUE benchmark with compact models. The definitions of 'Avg' and 'Δ-Avg' are the same as in Tab. 1.
<table><tr><td>Settings</td><td>#Samples</td><td>QQP</td><td>SST-2</td><td>QNLI</td></tr><tr><td rowspan="3">FWSVD+ARS</td><td>4000</td><td>69.75</td><td>83.37</td><td>64.10</td></tr><tr><td>6000</td><td>76.91</td><td>84.63</td><td>77.76</td></tr><tr><td>8000</td><td>77.42</td><td>85.21</td><td>77.87</td></tr><tr><td rowspan="3">+fine-tuning</td><td>4000</td><td>86.97</td><td>91.06</td><td>89.68</td></tr><tr><td>6000</td><td>87.37</td><td>91.06</td><td>90.04</td></tr><tr><td>8000</td><td>87.57</td><td>91.28</td><td>90.48</td></tr></table>

Table 4: The effect of the number of samples used to train the HN.
<table><tr><td>Settings</td><td>MRPC</td><td>STSB</td><td>COLA</td></tr><tr><td>w/o Rank Selection</td><td>81.66 (-7.74)</td><td>87.02 (-1.50)</td><td>44.84 (-10.17)</td></tr><tr><td>w/o hypernetwork</td><td>88.12 (-1.28)</td><td>88.31 (-0.22)</td><td>49.92 (-5.09)</td></tr><tr><td>w/o R_align</td><td>88.90 (-0.50)</td><td>88.03 (-0.49)</td><td>53.50 (-1.51)</td></tr><tr><td>ARS</td><td>89.40</td><td>88.52</td><td>55.01</td></tr></table>

Table 5: Ablation study on BERT when $p = 0.48$.
# 4.4 Compression on LLaMA-7B
In this section, we apply our method to LLaMA-7B. We remove around $75\%$ of the parameters for this setting. We compare our method with three
Figure 4: The task loss averaged over three different random seeds given $p = 0.48$ with BERT when learning the number of ranks. Panels: (a) MRPC, (b) STSB, (c) COLA.

<table><tr><td>Tasks</td><td>BoolQ</td><td>HellaSwag</td><td>OBQA</td><td>WinoGrande</td><td>ARC-e</td><td>ARC-c</td><td>Average</td><td>#Params</td></tr><tr><td>LLaMA-7B</td><td>74.98</td><td>76.18</td><td>42.6</td><td>70.01</td><td>72.85</td><td>44.71</td><td>63.56</td><td>6.7B</td></tr><tr><td>LLM-Pruner</td><td>61.47</td><td>47.56</td><td>35.2</td><td>55.09</td><td>46.46</td><td>28.24</td><td>45.67</td><td>3.4B</td></tr><tr><td>Scratch</td><td>57.13</td><td>39.16</td><td>29.4</td><td>49.64</td><td>41.96</td><td>24.57</td><td>40.31</td><td>1.8B</td></tr><tr><td>WSVD</td><td>60.46</td><td>46.62</td><td>31.4</td><td>55.25</td><td>47.81</td><td>26.45</td><td>44.67</td><td>1.8B</td></tr><tr><td>WSVD+ARS</td><td>63.27</td><td>50.97</td><td>32.0</td><td>56.67</td><td>51.89</td><td>26.71</td><td>46.92</td><td>1.7B</td></tr></table>
Table 6: Comparison results with LLaMA-7B.
baselines: (1) training from scratch with a similar number of parameters, (2) WSVD with uniform rank selections, and (3) LLM-Pruner (Ma et al., 2023). For the WSVD, ARS, and Scratch settings, the compressed models are fine-tuned for 576 A100 GPU hours, which is less than $1\%$ of the cost of training LLaMA-7B. More training and evaluation details are presented in the appendix. The results are shown in Tab. 6. From the table, we can see that our proposed ARS achieves the best average performance on these 6 tasks. LLM-Pruner performs better on OBQA and ARC-c, but it uses twice as many parameters as ARS. LLM-Pruner and our method fine-tune model weights in two different ways: LLM-Pruner is fine-tuned with LoRA (Hu et al., 2021) on Alpaca (Taori et al., 2023), whereas we fine-tune with the pre-training setting. Our results suggest that fine-tuning with the pre-training setting is more promising than LoRA+Alpaca at a larger compression rate. Training from scratch shows much worse performance, suggesting that compression techniques could be an alternative way to create models of different sizes given limited training budgets.
# 4.5 Further Analysis
To better understand our method, we present further analysis of ARS from several perspectives.
(1) The Number of Samples. In Tab. 4, we show the impact of the number of samples used when training the HN. For some datasets, such as QQP and QNLI, increasing the number of samples for the HN is very helpful. For SST-2, the impact is not obvious. Increasing the number of samples may have some
benefits, but the gain from using many more samples is marginal. The reason is that, unlike model weights, the search space for the number of ranks is much smaller, so the performance gain becomes less obvious once there are enough samples.
(2) Ablation Study. In Tab. 5, we present the ablation study results on MRPC, STSB, and COLA. 'w/o Rank Selection' means we ignore the property of SVD and use the index to perform element-wise selections. Under this setting, we find a significant performance drop. We also plot the training loss in Fig. 3. Clearly, element-wise rank selection hurts the structure of the low-rank factorization, making it much more difficult to regain performance by fine-tuning. This suggests that we should follow the property of SVD instead of ignoring it. 'w/o hypernetwork' means that we use a simple baseline with element-wise binary gates and keep other settings intact. In this setting, the performance drops noticeably, it is harder to reach the pre-defined compression rate $p$, and optimization is generally more difficult (it takes more steps and the training losses oscillate). Without $\mathcal{R}_{\text{align}}$, our method suffers an obvious performance decrease, which verifies the benefit of encouraging masks to follow the sorted singular values from SVD.
(3) Effectiveness of HN. We plot the task loss when learning the number of ranks with or without using HN in Fig. 4, which is also the setting of 'w/o hypernetwork' in Tab. 5. It is clear that our method can find a better solution and achieve a
<table><tr><td>Device Name</td><td>Quantization</td><td>Model</td><td>Model Size</td><td>Per Token Time</td><td>Tokens Per Sec</td></tr><tr><td rowspan="4">S23 Ultra 12GB</td><td rowspan="2">8-bit</td><td>Llama-7B</td><td>7.6 GB</td><td>301.3 ms</td><td>3.3</td></tr><tr><td>ARS-1.7B</td><td>1.9 GB</td><td>55.7 ms</td><td>17.9</td></tr><tr><td rowspan="2">4-bit</td><td>Llama-7B</td><td>4.0 GB</td><td>221.8 ms</td><td>4.5</td></tr><tr><td>ARS-1.7B</td><td>1.1 GB</td><td>47.1 ms</td><td>21.3</td></tr></table>
Table 7: Generation speed comparison between our method and the original Llama-7B model.
<table><tr><td>Settings</td><td>#Params</td><td>WikiText</td><td>PTB</td><td>C4</td></tr><tr><td rowspan="4">ARS</td><td>4.1B</td><td>115.62</td><td>183.12</td><td>117.76</td></tr><tr><td>3.3B</td><td>404.49</td><td>581.23</td><td>389.44</td></tr><tr><td>2.5B</td><td>1177.74</td><td>522.99</td><td>1100.35</td></tr><tr><td>1.7B</td><td>3893.10</td><td>4286.49</td><td>3621.82</td></tr><tr><td rowspan="4">+fine-tuning</td><td>4.1B</td><td>15.98</td><td>20.65</td><td>19.07</td></tr><tr><td>3.3B</td><td>17.07</td><td>21.99</td><td>19.93</td></tr><tr><td>2.5B</td><td>18.53</td><td>23.75</td><td>21.35</td></tr><tr><td>1.7B</td><td>20.54</td><td>26.82</td><td>23.53</td></tr></table>
Table 8: The effect of different pruning rates. We report the perplexity on WikiText, PTB, and C4.
much faster convergence rate with the HN on MRPC, STSB, and COLA. The HN thus largely improves the efficiency of learning the number of ranks. The plots of the $\mathcal{R}$ loss are shown in the Appendix.

(4) Generation Speed Comparison. In Tab. 7, we further show the generation speed comparison between ARS-1.7B and Llama-7B. Both models are deployed on a mobile device, the S23 Ultra 12GB. In short, the generation speed increases as the number of parameters decreases. If both models are quantized to 8 bits, the generation speed of the ARS-1.7B model is around $4.7\times$ faster than the original model. If both models are quantized to 4 bits, the generation speed of our model is around $5.4\times$ faster than the original model.
(5) The Effect of Different Pruning Rates for Llama-7B. We present the results before and after fine-tuning in Tab. 8. The fine-tuning setting for this experiment is quite short: the model is fine-tuned on only around 0.16B tokens. After this short fine-tuning, the perplexity of the model recovers quickly. Recovering the general ability of the original model, however, still requires a longer fine-tuning period.
# 5 Conclusion
In this paper, we proposed a new algorithm that adaptively selects the number of ranks for low-rank approximation of language models. We proposed to use a hypernetwork to predict the number of ranks for each operation. The predicted number of ranks is regularized using the SVD property and is encouraged to produce top-k-like binary masks. Our method resolves the issue with ordinary masking, which results in element-wise rank selections, delivering stable performance gains in a comprehensive collection of experiments. The extensive results also show our advantage over previous low-rank methods with uniform rank selections.
# References
Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. 2023. Pythia: A suite for analyzing large language models across training and scaling. arXiv preprint arXiv:2304.01373.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.

Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation.

Shangqian Gao, Feihu Huang, Yanfu Zhang, and Heng Huang. 2022. Disentangled differentiable network pruning. In European Conference on Computer Vision, pages 328-345. Springer.

Shangqian Gao, Burak Uzkent, Yilin Shen, Heng Huang, and Hongxia Jin. 2023a. Learning to jointly share and prune weights for grounding based vision and language models. In The Eleventh International Conference on Learning Representations.

Shangqian Gao, Zeyu Zhang, Yanfu Zhang, Feihu Huang, and Heng Huang. 2023b. Structural alignment for network pruning through partial regularization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 17402-17412.

Gene H Golub and Christian Reinsch. 1971. Singular value decomposition and least squares solutions. In Linear Algebra, pages 134-151. Springer.

Shaopeng Guo, Yujie Wang, Quanquan Li, and Junjie Yan. 2020. Dmcp: Differentiable markov channel pruning for neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1539-1547.

Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. 2018. Amc: Automl for model compression and acceleration on mobile devices. In Proceedings of the European Conference on Computer Vision (ECCV), pages 784-800.

Charles Herrmann, Richard Strong Bowen, and Ramin Zabih. 2020. Channel selection using gumbel softmax. In Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXVII, pages 241-257. Springer.

Yen-Chang Hsu, Ting Hua, Sungen Chang, Qian Lou, Yilin Shen, and Hongxia Jin. 2021. Language model compression with weighted low-rank factorization. In International Conference on Learning Representations.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.

Ting Hua, Yen-Chang Hsu, Felicity Wang, Qian Lou, Yilin Shen, and Hongxia Jin. 2022. Numerical optimizations for weighted low-rank estimation on language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 1404-1416. Association for Computational Linguistics.

Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144.

Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351.

Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR (Poster).

François Lagunas, Ella Charlaix, Victor Sanh, and Alexander M Rush. 2021. Block pruning for faster transformers. arXiv preprint arXiv:2109.04838.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.

Yang Liu. 2019. Fine-tune bert for extractive summarization. arXiv preprint arXiv:1903.10318.

Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Kwang-Ting Cheng, and Jian Sun. 2019. Metapruning: Meta learning for automatic neural network channel pruning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3296-3305.

Xinyin Ma, Gongfan Fang, and Xinchao Wang. 2023. Llm-pruner: On the structural pruning of large language models. arXiv preprint arXiv:2305.11627.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843.

Pavlo Molchanov, Arun Mallya, Stephen Tyree, Iuri Frosio, and Jan Kautz. 2019. Importance estimation for neural network pruning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11264-11272.

Matan Ben Noach and Yoav Goldberg. 2020. Compressing pre-trained language models by matrix decomposition. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 884-889.

Razvan Pascanu and Yoshua Bengio. 2014. Revisiting natural gradient for deep networks. In International Conference on Learning Representations (ICLR).

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.

Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. 2019. Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4780-4789.

Machel Reid, Edison Marrese-Taylor, and Yutaka Matsuo. 2021. Subformer: Exploring weight sharing for parameter efficiency in generative transformers. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4081-4090.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.

Victor Sanh, Thomas Wolf, and Alexander Rush. 2020. Movement pruning: Adaptive sparsity by fine-tuning. Advances in Neural Information Processing Systems, 33:20378-20389.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.

Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2020. Q-bert: Hessian based ultra low precision quantization of bert. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8815-8821.

Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for bert model compression. arXiv preprint arXiv:1908.09355.

Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. Mobilebert: a compact task-agnostic bert for resource-limited devices. arXiv preprint arXiv:2004.02984.

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019a. Glue: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019.

Ziheng Wang, Jeremy Wohlwend, and Tao Lei. 2019b. Structured pruning of large language models. arXiv preprint arXiv:1910.04732.

Genta Indra Winata, Andrea Madotto, Jamin Shin, Elham J Barezi, and Pascale Fung. 2019. On the effectiveness of low-rank matrix factorization for LSTM model compression. arXiv preprint arXiv:1908.09982.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Mengzhou Xia, Zexuan Zhong, and Danqi Chen. 2022. Structured pruning learns compact and accurate models. In Association for Computational Linguistics (ACL).

Zeyu Zhang, Thuy Vu, Sunil Gandhi, Ankit Chadha, and Alessandro Moschitti. 2022. Wdrass: A web-scale dataset for document retrieval and answer sentence selection. In CIKM '22, pages 4707-4711, New York, NY, USA. Association for Computing Machinery.

Zeyu Zhang, Thuy Vu, and Alessandro Moschitti. 2021. Joint models for answer verification in question answering systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3252-3262, Online. Association for Computational Linguistics.

Zeyu Zhang, Thuy Vu, and Alessandro Moschitti. 2023. Double retrieval and ranking for accurate question answering. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1751-1762, Dubrovnik, Croatia. Association for Computational Linguistics.

# A Limitations
Our work adaptively learns the number of ranks for each layer for individual tasks. As a result, the limitation of our method is that we always need to find a new configuration of the number of ranks for a new task; the number of ranks learned for previous tasks cannot be re-used on new tasks. This limitation brings additional computational costs for each new task. Fortunately, this additional cost is trivial on large datasets. For example, it takes 2.2 V100 GPU hours to train BERT on MNLI, while training the HN on 4000 samples costs around 0.1 V100 GPU hours on this task.
Alternatively, if we could use some statistics to capture the dataset distribution and incorporate them into learning the number of ranks, we might be able to predict the proper rank configuration for a new task directly from statistics of its data distribution. However, this may substantially increase the training time for the HN, since the problem becomes much more complex.
# B The Architecture of Hypernetwork
Table A1: The architecture of hypernetwork.
<table><tr><td>Input $z$</td></tr><tr><td>Bi-GRU(32, 64) → LayerNorm → GeLU</td></tr><tr><td>Linear$_l$(128, $N_l$) → Outputs $o_l$, $l = 1, \dots, L$</td></tr></table>
As we discussed in the paper, the hypernetwork is composed of linear layers and Bi-GRUs, and we present the architecture of the HN in Tab. A1. $z$ is initially sampled from a normal distribution and is then fixed during training. The outputs $o_l$ are continuous values. We use the following equation to convert them into $m_l$:

$$
m_l = \operatorname{round}(\operatorname{sigmoid}((o_l + g + b) / \tau)), \tag{9}
$$

where $\operatorname{sigmoid}(\cdot)$ is the sigmoid function, $\operatorname{round}(\cdot)$ is the rounding function, $g$ is sampled from the Gumbel distribution ($g \sim \mathrm{Gumbel}(0,1)$), $b$ is a constant that ensures the HN starts from the full rank, and $\tau$ is the temperature hyper-parameter. As shown in Eq. 9, the straight-through Gumbel-Sigmoid (Jang et al., 2016) is used to produce the final binary vector $m$. For all experiments, we set $\tau = 0.4$ and $b = 3.0$.
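
As a concrete illustration, here is a minimal PyTorch sketch that combines the architecture in Tab. A1 with the straight-through Gumbel-Sigmoid of Eq. 9; the class and variable names are ours, and initialization details are simplified:

```python
import torch
import torch.nn as nn

class RankHypernetwork(nn.Module):
    """Sketch of the HN in Tab. A1: a Bi-GRU over a fixed input z, followed
    by one linear head per factorized operation l emitting rank logits o_l."""

    def __init__(self, ranks_per_layer, tau=0.4, b=3.0):
        super().__init__()
        self.gru = nn.GRU(32, 64, bidirectional=True)   # Bi-GRU(32, 64)
        self.norm = nn.LayerNorm(128)
        self.act = nn.GELU()
        self.heads = nn.ModuleList(
            [nn.Linear(128, n_l) for n_l in ranks_per_layer]  # Linear_l(128, N_l)
        )
        # z is sampled once from a normal distribution, then kept fixed.
        self.register_buffer("z", torch.randn(len(ranks_per_layer), 1, 32))
        self.tau, self.b = tau, b

    def forward(self):
        h, _ = self.gru(self.z)                 # (L, 1, 128)
        h = self.act(self.norm(h)).squeeze(1)   # (L, 128)
        masks = []
        for head, h_l in zip(self.heads, h):
            o = head(h_l)
            # Eq. 9: straight-through Gumbel-Sigmoid; b = 3.0 biases the
            # sigmoid towards 1 so training starts from the full rank.
            u = torch.rand_like(o).clamp(1e-9, 1 - 1e-9)
            g = -torch.log(-torch.log(u))       # g ~ Gumbel(0, 1)
            soft = torch.sigmoid((o + g + self.b) / self.tau)
            hard = soft.round()
            masks.append(hard + soft - soft.detach())  # straight-through
        return masks  # binary mask m_l over the ranks of operation l
```

The `hard + soft - soft.detach()` trick yields exact binary masks in the forward pass while letting gradients flow through the soft sigmoid, which is what allows the HN to be trained end-to-end.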
<table><tr><td>Dataset</td><td>Tables</td><td>Models</td><td>p</td><td>r</td></tr><tr><td rowspan="4">GLUE</td><td>Tab. 1</td><td>BERT-base</td><td>0.48</td><td>0.33</td></tr><tr><td>Tab. 2</td><td>BERT-base</td><td>0.33</td><td>0.22</td></tr><tr><td>Tab. 3</td><td>DistillBERT</td><td>0.48</td><td>0.33</td></tr><tr><td>Tab. 3</td><td>MobileBERT</td><td>0.75</td><td>0.60</td></tr><tr><td>Pile</td><td>Tab. 6</td><td>LLaMA-7B</td><td>0.24</td><td>0.15</td></tr><tr><td>WikiText-103</td><td>Tab. A4</td><td>Pythia-160m</td><td>0.48</td><td>0.36</td></tr></table>
Table A2: Choice of $p$ for different models. $p$ is the number of remaining parameters divided by the total number of parameters. 'r' is the ratio of ranks uniformly preserved by SVD, IWSVD, and FWSVD.
# C Implementation and Training Details
For BERT-based models on GLUE tasks (Wang et al., 2019a), we use the Huggingface (Wolf et al., 2020) code for experiments, which is under the Apache 2.0 license. We use the lit-llama codebase (https://github.com/Lightning-AI/lit-llama), also under the Apache 2.0 license, for fine-tuning the Pythia (Biderman et al., 2023) model on the WikiText (Merity et al., 2016) dataset. The lit-llama code is also used to fine-tune the compressed LLaMA-7B models on Pile (Gao et al., 2020).
GLUE (Wang et al., 2019a) contains nine English sentence understanding tasks, which cover a broad range of domains, data quantities, and difficulties. Pile (Gao et al., 2020) is an 825 GiB English text corpus targeted at training large-scale language models. The Pile is constructed from 22 diverse, high-quality subsets, both existing and newly constructed, many of which derive from academic or professional sources. The WikiText language modeling dataset (Merity et al., 2016) is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia; it is available under the Creative Commons Attribution-ShareAlike License. We follow the intended usage and licenses of all datasets and codebases we used.
For all GLUE tasks, we train the HN on 4000 randomly sampled examples for 8 epochs. If a dataset has fewer than 4000 samples, we train the HN on the whole dataset for 8 epochs. For both HN training and BERT training, we set the mini-batch size to 32 and train on 1 Nvidia V100 GPU.
For the language modeling task, we directly use the pre-trained Pythia-160m model and fine-tune it on the WikiText-103 dataset. We set the sequence length to 512 and the mini-batch size to 64. The initial learning rate is $2 \times 10^{-5}$, and the learning rate is linearly decayed. We also list the choices of $p$ for different tasks and the ratio of ranks preserved by SVD, IWSVD, and FWSVD in Tab. A2.
Figure A1: The number of parameters vs. the performance after fine-tuning for SVD/IWSVD and our ARS variants. Panels (a)/(d): MRPC, (b)/(e): STSB, (c)/(f): COLA.

Figure A2: The parameter regularization loss $\mathcal{R}$ averaged over three different random seeds given $p = 0.48$ with BERT when learning the number of ranks. Panels: (a) MRPC, (b) STSB, (c) COLA.

Figure A3: Perplexity for different compression rates before fine-tuning on the WikiText-103 dataset.
The Pythia model is trained for 24,000 iterations in total. We use 2 Nvidia A100 GPUs for this experiment.
For LLaMA-7B, we set $p = 0.24$ and train the HN on the Pile validation set on 2 Nvidia A100 GPUs for 4000 iterations. We use a constant learning rate of $1 \times 10^{-3}$ for this stage. After compression, the model is trained on the Pile (Gao et al., 2020) training set with 8 Nvidia A100 GPUs, a mini-batch size of 48, a block size of 2048, and a starting learning rate of $5 \times 10^{-5}$. We use the cosine scheduler for learning rate decay, and the final learning rate is $5 \times 10^{-6}$. The model is trained for 210,000 steps, and training can be completed within 3 days. The total number of training tokens is around 20B. Our training code is built on lit-llama: https://github.com/Lightning-AI/lit-llama. We use the lm-evaluation-harness (Gao et al., 2021) to evaluate the compressed model.
# D Importance Calculation
In this section, we briefly review the Fisher information and the other importance score used in our paper. The Fisher information measures the amount of information that an observable dataset $D$ carries about a model parameter $w$. More specifically,

$$
\mathbf{I}_w^{\mathrm{FI}} = \mathbf{E}\left[\left(\frac{\partial}{\partial w}\log p(D \mid w)\right)^{2}\right] \approx \frac{1}{|D|}\sum_{i=1}^{|D|}\left(\frac{\partial}{\partial w}\mathcal{L}(f(x_i; w), y_i)\right)^{2}.
$$
For IWSVD, the importance score follows the definition from (Molchanov et al., 2019):

$$
\mathbf{I}_w^{\mathrm{Imp}} = \left(\frac{\partial \mathcal{L}}{\partial w} w\right)^{2}.
$$
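
Both quantities are diagonal scores that can be estimated by accumulating squared gradients over a dataset. The sketch below is a simplified illustration with our own names, not the paper's code; it computes either score:

```python
import torch

def importance_scores(model, data_loader, loss_fn, use_weight=False):
    """Accumulate per-parameter importance over a dataset.

    use_weight=False: empirical Fisher information, the average squared
    gradient of the loss (the weighting used by FWSVD).
    use_weight=True: Molchanov-style importance (dL/dw * w)^2 used by IWSVD.
    """
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    num_batches = 0
    for x, y in data_loader:
        model.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is None:
                continue
            g = p.grad.detach()
            scores[n] += (g * p.detach()) ** 2 if use_weight else g ** 2
        num_batches += 1
    return {n: s / max(num_batches, 1) for n, s in scores.items()}
```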
# E Additional Results
We further provide the result of #Params vs. performance for SVD/IWSVD and our ARS variants in Fig. A1. SVD+ARS and IWSVD+ARS clearly outperform SVD/IWSVD at almost all compression rates. We also visualize the perplexity before fine-tuning for different compression rates in Fig. A3. FWSVD+ARS outperforms FWSVD at every compression rate for the number of ranks. At higher compression rates, the perplexity of FWSVD+ARS is often an order of magnitude lower than that of FWSVD, which shows the advantage of adaptive selection of the number of ranks.
In Fig. A4, we visualize the number of ranks selected by ARS for each operation. In Fig. A4a, ARS allocates more ranks to the early and middle layers for MRPC. In Fig. A4b, ARS allocates more ranks to both the early and late layers for WikiText. The difference between MRPC and WikiText probably arises because the language modeling task focuses on both input contexts and output predictions, whereas MRPC only needs to decide whether two input sentences are equivalent, which is less complex. In summary, ARS produces different selections of the number of ranks for different tasks.
To provide a more detailed understanding of the effectiveness of the HN, we plot the parameter regularization loss $\mathcal{R}$ with and without the HN. The $\mathcal{R}$ loss is normalized between 0 and 1 for better visualization.

Figure A4: The number of ranks selected by FWSVD+ARS for different tasks. Panels: (a) MRPC - BERT, (b) WikiText - Pythia-160m.

Figure A5: Training loss on WikiText.
In Fig. A2, we can see that our method with the HN quickly reduces the parameter loss $\mathcal{R}$. Without the HN, the $\mathcal{R}$ loss keeps oscillating, and it seems hard to reach the desired parameter budget.
# F Language Modeling Task with Pythia
We further apply our method to the language modeling task on the WikiText-103 (Merity et al., 2016) dataset. Results are shown in Tab. A4. From the table, we can see that FWSVD+ARS performs much better than FWSVD. In particular, FWSVD+ARS compresses $6\%$ more parameters than FWSVD, and its perplexity is 3.07 and 3.24 points lower than FWSVD's on the test and validation splits. FWSVD+ARS even performs better than the baseline on the test split. These results again demonstrate the importance of selecting the number of ranks across different tasks. In Fig. A5, we visualize the training loss of FWSVD and FWSVD+ARS during fine-tuning on WikiText. FWSVD+ARS always starts at a lower loss value, and the gap between FWSVD and ARS is maintained until the end of training. By properly choosing the number of ranks, we obtain a model more suitable for the task, making it easier to regain performance.
# G Comparison with Pruning Methods
We provide further comparison results against structural pruning methods in Tab. A3. For IE (Molchanov et al., 2019), we built this structural pruning baseline for compressing language models based on the original method.
<table><tr><td>Task</td><td>MRPC</td><td>STSB</td><td>COLA</td><td>SST-2</td><td>MNLI</td><td>QNLI</td><td>QQP</td><td>Avg</td><td># Params</td></tr><tr><td>IE (Molchanov et al., 2019)</td><td>45.58</td><td>64.90</td><td>8.04</td><td>66.92</td><td>48.82</td><td>49.48</td><td>50.70</td><td>47.77</td><td>66.8M</td></tr><tr><td>+ fine-tuning</td><td>87.03</td><td>86.74</td><td>38.12</td><td>89.01</td><td>83.86</td><td>88.29</td><td>85.92</td><td>79.58</td><td>66.8M</td></tr><tr><td>IWSVD+ARS (ours)</td><td>81.58</td><td>76.93</td><td>23.97</td><td>83.94</td><td>51.88</td><td>77.58</td><td>75.05</td><td>67.28</td><td>65.1M</td></tr><tr><td>+ fine-tuning (ours)</td><td>88.13</td><td>88.23</td><td>52.88</td><td>91.40</td><td>83.86</td><td>89.91</td><td>87.59</td><td>83.14</td><td>65.1M</td></tr><tr><td>CoFiPruning (Xia et al., 2022)</td><td>87.70</td><td>86.90</td><td>43.16</td><td>89.50</td><td>82.94</td><td>87.73</td><td>86.35</td><td>80.61</td><td>66.7M</td></tr><tr><td>WSVD+ARS+fine-tuning (ours)</td><td>89.40</td><td>88.52</td><td>55.01</td><td>91.06</td><td>83.68</td><td>89.68</td><td>87.41</td><td>83.54</td><td>65.1M</td></tr></table>
Table A3: Comparison against structural pruning methods.
<table><tr><td>Settings</td><td>Test (ppl)</td><td>Val (ppl)</td><td>#PT</td><td>#PM</td><td>↓ #PM</td></tr><tr><td>Pythia-160m</td><td>25.09</td><td>24.97</td><td>162.3m</td><td>85.0M</td><td>-</td></tr><tr><td>FWSVD</td><td>18331.07</td><td>20525.75</td><td>123.2M</td><td>46.0M</td><td>45.9%</td></tr><tr><td>+fine-tuning</td><td>28.05</td><td>29.07</td><td>123.2M</td><td>46.0M</td><td>45.9%</td></tr><tr><td>FWSVD+ARS</td><td>3020.17</td><td>3041.04</td><td>118.0M</td><td>40.8M</td><td>52.0%</td></tr><tr><td>+fine-tuning</td><td>24.98</td><td>25.83</td><td>118.0M</td><td>40.8M</td><td>52.0%</td></tr></table>
Table A4: Results of the language modeling task on WikiText-103. 'PT' represents the total number of parameters. 'PM' represents the number of model parameters excluding the Embedding layer. 'ppl' represents perplexity.
The training and fine-tuning settings are the same as for our method. We compare IE with IWSVD+ARS since they use the same importance score. Our method has better average task performance both before and after fine-tuning. For CoFiPruning (Xia et al., 2022), we use its GitHub repository and modify some hyperparameters to build a fair comparison baseline. We set the fine-tuning of CoFiPruning to 3 epochs, the same as our method. In addition, the first stage of CoFiPruning is reduced to 20 epochs for small datasets and 5 epochs for large datasets. Recall that our method first trains the model for 3 epochs for each task, and the hypernetwork is trained for at most 1000 steps. As a result, even though we reduced the training time for CoFiPruning, it still has a larger computational cost than our method. In addition, we turned off the knowledge distillation of CoFiPruning, since our method does not rely on any form of knowledge distillation. Our method still has a clear advantage in this setting.

2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a76f40a8e22984dbbfe6e1b33503741c9ce2c80e5fedb5cb137e1997307982bc
+size 888201

2024/Adaptive Rank Selections for Low-Rank Approximation of Language Models/layout.json ADDED
The diff for this file is too large to render. See raw diff

2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/59b18015-e896-4681-bd77-ccb145a03e89_content_list.json ADDED
The diff for this file is too large to render. See raw diff

2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/59b18015-e896-4681-bd77-ccb145a03e89_model.json ADDED
The diff for this file is too large to render. See raw diff

2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/59b18015-e896-4681-bd77-ccb145a03e89_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad2117dc4530b4dbbd910013f14f0dcbc62582502d0eb0fcb735bd5666b30eb5
+size 497331

2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/full.md ADDED
@@ -0,0 +1,313 @@
# Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity
Soyeong Jeong $^{1}$ Jinheon Baek $^{2}$ Sukmin Cho $^{1}$ Sung Ju Hwang $^{1,2}$ Jong C. Park $^{1*}$
School of Computing$^{1}$ Graduate School of AI$^{2}$
Korea Advanced Institute of Science and Technology $^{1,2}$
{starsuzi, jinheon.baek, nellllpic, sjhwang82, jongpark}@kaist.ac.kr
# Abstract
Retrieval-Augmented Large Language Models (LLMs), which incorporate the non-parametric knowledge from external knowledge bases into LLMs, have emerged as a promising approach to enhancing response accuracy in several tasks, such as Question-Answering (QA). However, even though there are various approaches dealing with queries of different complexities, they either handle simple queries with unnecessary computational overhead or fail to adequately address complex multi-step queries; yet, not all user requests fall into only one of the simple or complex categories. In this work, we propose a novel adaptive QA framework that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs from the simplest to the most sophisticated ones based on the query complexity. Also, this selection process is operationalized with a classifier, which is a smaller LM trained to predict the complexity level of incoming queries with automatically collected labels, obtained from actual predicted outcomes of models and inherent inductive biases in datasets. This approach offers a balanced strategy, seamlessly adapting between the iterative and single-step retrieval-augmented LLMs, as well as the no-retrieval methods, in response to a range of query complexities. We validate our model on a set of open-domain QA datasets, covering multiple query complexities, and show that ours enhances the overall efficiency and accuracy of QA systems, compared to relevant baselines including the adaptive retrieval approaches. Code is available at: https://github.com/starsuzi/Adaptive-RAG.
# 1 Introduction
Recent Large Language Models (LLMs) (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023; Anil et al., 2023) have shown overwhelming performance across diverse tasks, including
Figure 1: QA performance (F1) and efficiency (Time/Query) for different retrieval-augmented generation approaches. We use GPT-3.5-Turbo-Instruct as the base LLM.
question-answering (QA) (Yang et al., 2018; Kwiatkowski et al., 2019). However, they still generate factually incorrect answers, since their knowledge relies solely on their parametric memory (Kasai et al., 2022; Mallen et al., 2023). Meanwhile, memorizing all the (ever-changing) world knowledge may not be possible. To address this problem, retrieval-augmented LLMs (Borgeaud et al., 2022; Izacard et al., 2023; Shi et al., 2023), which incorporate non-parametric knowledge into LLMs with additional retrieval modules, have gained increasing attention. Specifically, these models access a knowledge base, which serves as an extensive repository of information across various subjects and disciplines, to retrieve information relevant to the given input, and then incorporate the retrieved information into LLMs, which enables them to stay accurate and current with world knowledge.
A particularly salient application of retrieval-augmented LLMs is handling QA tasks, whose goal is to provide correct answers in response to user queries, especially those of high complexity. Early work on retrieval-augmented LLMs focuses primarily on single-hop queries (Lazaridou et al., 2022; Ram et al., 2023), whose answers are typically found within a single document; therefore, this approach involves retrieving a relevant document based on the query and subsequently integrating this information into QA models to formulate a response. However, unlike this single-hop QA, some queries require connecting and aggregating multiple documents, which are, furthermore,
Figure 2: A conceptual comparison of different retrieval-augmented LLM approaches to question answering. (A) In response to a query, this single-step approach retrieves relevant documents and then generates an answer. However, it may not be sufficient for complex queries that require multi-step reasoning. (B) This multi-step approach iteratively retrieves documents and generates intermediate answers, which is powerful yet largely inefficient for simple queries since it requires multiple accesses to both LLMs and retrievers. (C) Our adaptive approach can select the most suitable strategy for retrieval-augmented LLMs, ranging from iterative, to single, to even no retrieval approaches, based on the complexity of given queries determined by our classifier.
often not answerable through a single-step process of retrieval-and-response. An example query is 'When did the people who captured Malakoff come to the region where Philipsburg is located?', which requires four reasoning steps to solve. Therefore, to effectively handle such complex queries, recent studies have concentrated largely on multi-step and multi-reasoning QA, which requires iterative accesses to both LLMs and retrievers multiple times (Press et al., 2023; Trivedi et al., 2023), at the cost of heavy computational overheads.
Yet, we should rethink: in a real-world scenario, are all the requests from users complex? Instead, users might often ask simple and straightforward questions, while only occasionally asking complex ones. Specifically, a query such as 'Paris is the capital of what?' is likely to be asked more frequently, compared to the aforementioned multi-step query, and this simpler query might also be easily answered by the LLMs themselves, without accessing external knowledge. In other words, a multi-step QA approach could give rise to unnecessary computational overhead for simple queries, even though it would be vital for complex queries (see Figure 2 (B)). On the other hand, handling complex queries with single-step retrieval or even non-retrieval strategies would be largely insufficient (Figure 2 (A)). This suggests the need for an adaptive QA system, which can dynamically adjust the operational strategies of retrieval-augmented LLMs based on the query complexity. While some recent approaches are capable of doing this based on the frequency of entities in queries (Mallen et al., 2023; Zhao et al., 2023) or on the generated outputs from models for multi-step QA (Trivedi et al., 2023), they are still suboptimal: the former methods are overly simplistic, failing to consider multi-hop queries, while the latter are excessively complex, terminating the answer-solving steps only after several rounds of module access.
In this work, considering the diverse complexity levels of real-world queries, we argue that previous one-size-fits-all approaches might be inadequate to cover all of them. Instead, we propose to select the most suitable strategy from a range of (retrieval-augmented) LLMs, each of which is tailored to the specific complexity of the input query. Notably, a critical step in this process is pre-defining the query complexity, which is instrumental in determining the most fitting model for it. In this work, we operationalize this process with a novel classifier, which is a smaller model trained to predict the complexity level of incoming queries (see Figure 2 (C)). Moreover, we automatically collect its training dataset without human labeling, by leveraging the predicted outcomes (i.e., which models accurately respond to which queries) as well as by capitalizing on the inherent biases in existing datasets (i.e., samples in the datasets are designed either for single-step or for multi-step QA scenarios). This proposed method can offer a robust middle ground among the iterative LLM augmentation methods for complex queries, single-step methods for simpler queries, and even no-retrieval-augmented methods for the most straightforward queries (answerable by LLMs themselves), thus significantly enhancing the overall efficiency and accuracy, as shown in Figure 1. We refer to our framework as Adaptive Retrieval-Augmented Generation (Adaptive-RAG).
We validate Adaptive-RAG on benchmark open-domain QA datasets, covering a wide range of query complexities from single-hop (Rajpurkar et al., 2016; Joshi et al., 2017; Kwiatkowski et al., 2019) to multi-hop (Yang et al., 2018; Ho et al., 2020; Trivedi et al., 2022b) queries. The experimental results show that ours significantly improves the overall accuracy and efficiency, compared to prior adaptive strategies, on multiple LLMs, such as GPT-3.5 (Brown et al., 2020) and the FLAN-T5 series (Chung et al., 2022).
Our contributions and findings are threefold:
- We point out the realistic scenario of queries of varying complexities, and find that existing retrieval-augmented generation approaches tend to be overly simple or overly complex.
- We adapt retrieval-augmented LLMs to the query complexity assessed by the classifier, which enables the utilization of the most suitable approach tailored to each query.
- We show that our Adaptive-RAG is highly effective and efficient, balancing complexity and simplicity across diverse queries.
# 2 Related Work
Open-domain QA Open-domain QA is the task of accurately answering a query by searching for query-relevant documents and then interpreting them to provide answers (Chen et al., 2017; Zhu et al., 2021), which thus generally involves two modules: a retriever (Karpukhin et al., 2020; Xiong et al., 2021) and a reader (Yang et al., 2019; Izacard and Grave, 2021; Jeong et al., 2023). Along with the emergence of LLMs with superior reasoning capabilities thanks to their billion-sized parameters (Wei et al., 2022a), a synergy between LLMs and retrievers has led to significant advancements (Lazaridou et al., 2022; Ram et al., 2023). Specifically, this integration has been shown to enhance open-domain QA by mitigating the hallucination problem of LLMs through strengthened reasoning abilities of the reader, as well as by utilizing the retrieved external documents (Cho et al., 2023). Despite these advancements in single-hop retrieval-augmented LLMs, however, the complexity of some queries requires a more sophisticated strategy.
Multi-hop QA Multi-hop QA is an extension of conventional Open-domain QA, which additionally requires the system to comprehensively gather and contextualize information from multiple documents (often iteratively), to answer more complex queries (Trivedi et al., 2022a; Yang et al., 2018). In the realm of multi-hop QA, the approach to iteratively access both LLMs and the retrieval module is generally employed. Specifically, Khattab et al. (2022), Press et al. (2023), Pereira et al. (2023) and Khot et al. (2023) proposed to first decompose the multi-hop queries into simpler single-hop queries, repeatedly access the LLMs and retriever to solve these sub-queries, and merge their solutions to formulate a complete answer. In contrast
to this decomposition-based approach, other recent studies, such as Yao et al. (2023) and Trivedi et al. (2023), explored the interleaving of Chain-of-Thought reasoning (Wei et al., 2022b) — a method where a logical sequence of thoughts is generated — with document retrieval, repeatedly applying this process until the reasoning chain generates the answer. In addition, Jiang et al. (2023) introduced an approach to repeatedly retrieving new documents if the tokens within generated sentences have low confidence. However, the aforementioned methods overlooked the fact that, in real-world scenarios, queries are of a wide variety of complexities. Therefore, it would be largely inefficient to iteratively access LLMs and retrievers for every query, which might be simple enough with a single retrieval step or even only with an LLM itself.
Adaptive Retrieval To handle queries of varying complexities, the adaptive retrieval strategy aims to dynamically decide whether to retrieve documents or not, based on each query's complexity. In this vein, Mallen et al. (2023) proposed to decide the query's complexity level based on the frequency of its entities and suggested using the retrieval modules only when the frequency falls below a certain threshold. However, this approach, focusing solely on the binary decision of whether to retrieve or not, may not be sufficient for more complex queries that require multiple reasoning steps. Additionally, Qi et al. (2021) proposed an approach that performs a fixed set of operations (retrieving, reading, and reranking) multiple times until the answer is derived for the given query, which is built upon traditional BERT-like LMs. However, unlike our Adaptive-RAG, which pre-determines the query complexity and adapts the operational behavior of any off-the-shelf LLMs accordingly, this approach applies the same fixed operations to every query regardless of its complexity and also necessitates additional specific training of the LMs. Concurrent to our work, Asai et al. (2024) suggested training a sophisticated model to dynamically retrieve, critique, and generate text. Nevertheless, we argue that all the aforementioned adaptive retrieval methods, which rely on a single model, might be suboptimal for handling queries of a range of different complexities, since they tend to be either overly simple or overly complex for all input queries. This demands a new approach that can select the most suitable strategy of retrieval-augmented LLMs tailored to the query complexity.
# 3 Method
In this section, we describe our approach to adapting retrieval-augmented LLMs, by pre-determining the query complexity and then selecting the most fitting strategies for retrieval-augmented LLMs.
# 3.1 Preliminaries
We begin with preliminaries, formally introducing different strategies of retrieval-augmented LLMs.
Non-Retrieval for QA Let us first define an LLM as a model LLM, which takes a sequence of tokens $\pmb{x} = [x_{1}, x_{2}, \dots, x_{n}]$ as input and then generates a sequence of tokens $\pmb{y} = [y_{1}, y_{2}, \dots, y_{n}]$ as output, formalized as follows: $\pmb{y} = \mathsf{LLM}(\pmb{x})$. Then, in our problem setup for QA, $\pmb{x}$ and $\pmb{y}$ become the input query $(\pmb{q})$ from the user and the generated answer $(\bar{\pmb{a}})$ from the LLM, respectively: $\pmb{q} = \pmb{x}$ and $\bar{\pmb{a}} = \pmb{y}$. Subsequently, the most naive LLM-powered QA model can be represented as follows: $\bar{\pmb{a}} = \mathsf{LLM}(\pmb{q})$. Ideally, $\bar{\pmb{a}}$ should match the actual correct answer $\pmb{a}$. This non-retrieval-based QA method is highly efficient and could be a promising approach to handling easy queries, as the size of LLMs becomes extremely large along with their capacity to store a large amount of knowledge. However, this approach is largely problematic for queries that require precise or current knowledge of specific people, events, or any subjects beyond the LLMs' internal knowledge.
Single-step Approach for QA To address the aforementioned scenarios where the LLM may struggle with queries that are not answerable by the LLM itself, we can utilize external knowledge $\pmb{d}$, which includes useful information for queries, retrieved from the external knowledge source $\mathcal{D}$, which could be an encyclopedia (e.g., Wikipedia) consisting of millions of documents. Specifically, to obtain such $\pmb{d}$ from $\mathcal{D}$, a retrieval model is necessary, which returns documents based on their relevance to the given query. This process can be formulated as follows: $\pmb{d} = \text{Retriever}(\pmb{q}; \mathcal{D})$, where $\text{Retriever}$ is the retrieval model, with $\pmb{d} \in \mathcal{D}$. Here, we can use any off-the-shelf retriever (Robertson et al., 1994; Karpukhin et al., 2020).
After the retrieval step is done, we now have a pair of query $\pmb{q}$ and its relevant documents $\pmb{d}$ . Then, in order to augment LLMs with this retrieved external knowledge, we can incorporate it into the input of LLMs, represented as follows: $\bar{a} = \mathsf{LLM}(q,d)$ .
This process allows LLMs to gain access to the external information contained in $d$, which can provide supplementary context that the internal knowledge of the LLM lacks and can subsequently improve the accuracy and currency of LLMs for QA.
Multi-step Approach for QA Even though the aforementioned single-step approach offers significant improvements over non-retrieval for $q$ that requires external knowledge, it encounters notable limitations, particularly when dealing with complex queries that necessitate synthesizing information from multiple source documents and reasoning over them. This is where a multi-step approach and reasoning for QA become essential.
In this multi-step approach, LLM interacts with Retriever over several rounds, progressively refining its understanding of $\pmb{q}$, until it formulates the final answer from the findings accumulated across these multiple steps. Specifically, the process begins with the initial query $\pmb{q}$, and at every retrieval step $i$, new documents $\pmb{d}_i$ are retrieved from $\mathcal{D}$ and then incorporated into the input of LLMs, as follows: $\bar{a}_i = \mathsf{LLM}(\pmb{q},\pmb{d}_i,\pmb{c}_i)$, where the additional context $\pmb{c}_i$ can be composed of the previous documents and outcomes $(d_1,d_2,\dots,d_{i - 1},\bar{a}_1,\bar{a}_2,\dots,\bar{a}_{i - 1})$, and $\pmb{d}_i = \text{Retriever}(\pmb{q},\pmb{c}_i; \mathcal{D})$.$^1$ We would like to note that this iterative, multi-step process enables the LLM to build a more comprehensive and extensive foundation for solving queries effectively, and it is specifically adept at complex multi-hop queries where answers depend on interconnected pieces of information. However, it is important to recognize that this multi-step approach can be resource-intensive due to the repeated accesses to Retriever and LLM, which entail substantial computational costs.
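
To make the three formulations concrete, here is a minimal Python sketch of the strategies above; `llm` and `retriever` are placeholder callables, and `is_final` is a hypothetical stopping check standing in for the method's actual termination condition:

```python
def no_retrieval_qa(q, llm):
    # a = LLM(q): answer directly from parametric knowledge.
    return llm(q)

def single_step_qa(q, llm, retriever):
    # a = LLM(q, d) with d = Retriever(q; D).
    d = retriever(q)
    return llm(q, docs=d)

def multi_step_qa(q, llm, retriever, max_steps=5):
    # Iteratively retrieve with the accumulated context c_i, refining
    # intermediate answers until a final answer is produced.
    docs, answers = [], []
    for _ in range(max_steps):
        d_i = retriever(q, context=(docs, answers))
        a_i = llm(q, docs=d_i, context=(docs, answers))
        docs.append(d_i)
        answers.append(a_i)
        if is_final(a_i):  # assumed helper: detects a complete answer
            break
    return answers[-1]
```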
# 3.2 Adaptive-RAG: Adaptive Retrieval-Augmented Generation
We now introduce our adaptive retrieval-augmented LLMs, which are built upon three different strategies described in the previous section, and which are designed to select the most suitable strategy according to the complexity of queries.
Adapting Retrieval-Augmented LLMs Note that, in real-world scenarios, not all queries from users have the same level of complexity, necessitating tailored strategies for handling each query. In other words, employing the most basic, non-retrieval-based approach $\mathsf{LLM}(\pmb{q})$ to respond to a complex query $\pmb{q}$ would be ineffective (Figure 2, A); conversely, using the more elaborate multi-step approach $\mathsf{LLM}(\pmb{q}, \pmb{d}, \pmb{c})$ for a simple $\pmb{q}$ would be inefficient (Figure 2, B). Therefore, our adaptive framework is designed to dynamically adjust the query-handling strategy of retrieval-augmented LLMs, which is achieved by determining the complexity of each query before attempting a solution. Notably, this framework offers a robust middle ground with a range of solutions, from the simplest approach for the most straightforward queries, to the one-step approach for moderate queries, and up to the most comprehensive and rigorous approach for complex queries. In addition, since the operations of the LLM and the Retriever remain consistent regardless of their inputs, our method can seamlessly go back and forth across queries of different complexities, without changing the internal model architecture or parameters during adaptation.
Query Complexity Assessment To operationalize our adaptive retrieval-augmented LLM framework, we should determine the query complexity, and to achieve this, we propose to model a complexity classifier, whose goal is to return the appropriate complexity level of the given query. Specifically, given the query $\pmb{q}$ , our classifier can be formulated as follows: $o = \text{Classifier}(\pmb{q})$ , where Classifier is a smaller Language Model trained to predict one of three different complexity levels and $o$ is its corresponding class label. In our classifier design, there are three class labels: 'A', 'B', and 'C', where 'A' indicates that $\pmb{q}$ is straightforward and answerable by $\mathsf{LLM}(\pmb{q})$ itself, 'B' indicates that $\pmb{q}$ has moderate complexity, where at least the single-step approach $\mathsf{LLM}(\pmb{q}, \pmb{d})$ is needed, and 'C' indicates that $\pmb{q}$ is complex, requiring the most extensive solution $\mathsf{LLM}(\pmb{q}, \pmb{d}, \pmb{c})$<sup>2</sup>.
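Putting the pieces together, a minimal sketch of the resulting routing logic is given below, reusing the `single_step_qa` and `multi_step_qa` sketches above; `classify` is a hypothetical wrapper around the trained classifier, not its actual interface.

```python
def classify(query: str) -> str:
    """Placeholder: a real implementation would run the trained classifier."""
    return "B"

def adaptive_rag(query: str, corpus: list[str]) -> str:
    label = classify(query)                       # o = Classifier(q)
    if label == "A":                              # straightforward: LLM(q)
        return generate(f"Question: {query}\nAnswer:")
    if label == "B":                              # moderate: LLM(q, d)
        return single_step_qa(query, corpus)
    return multi_step_qa(query, corpus)           # complex: LLM(q, d, c)
```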
Training Strategy The remaining step is to train the smaller Language Model for the Classifier, so that it accurately predicts the complexity $o$ of a given query $\pmb{q}$ . Yet, there is no annotated dataset available for query-complexity pairs. Hence, we propose to automatically construct the training dataset with two particular strategies.
To be specific, we first aim at labeling the query complexity based on the results from the three different retrieval-augmented LLM strategies, determining the label according to what each query needs. For example, if the simplest non-retrieval-based approach correctly generates the answer, the label for its corresponding query is assigned 'A'. Also, to break ties between different models in providing the label to a query, we give higher priority to the simpler model. In other words, if both the single-step and multi-step approaches produce the same correct answer while the non-retrieval-based approach fails, we assign label 'B' to the corresponding query.
However, this labeling strategy has a limitation in that not all the queries are assigned labels, since the three retrieval-augmented approaches may all fail to generate the correct answer. On the other hand, the benchmark datasets may already carry meaningful inductive biases about the most appropriate retrieval-augmented LLM strategies for their queries, considering the ways they were created (e.g., QA datasets that require sequential reasoning usually necessitate a multi-step approach, while queries paired with labeled single documents are ideally answerable with the single-step approach). Therefore, for those queries that remain unlabeled after the first labeling step, we assign 'B' to queries in single-hop datasets and 'C' to queries in multi-hop datasets. Finally, we train the Classifier with these automatically-collected query-complexity pairs<sup>3</sup>, using a cross-entropy loss. Then, at inference, we can determine the complexity of the query, which is one of {'A', 'B', 'C'}, by forwarding it to the Classifier: $o = \text{Classifier}(\pmb{q})$ .
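The two labeling strategies compose naturally, as the sketch below shows: outcome-based silver labels with a preference for the simplest correct strategy, and the dataset's inductive bias as a fallback. This is a hedged sketch; the function and argument names are illustrative, not taken from our released code.

```python
def label_query(gold: str, pred_no: str, pred_single: str, pred_multi: str,
                is_multi_hop: bool) -> str:
    # Prefer the simplest strategy that already answers the query correctly.
    if pred_no == gold:
        return "A"
    if pred_single == gold:
        return "B"
    if pred_multi == gold:
        return "C"
    # All three strategies failed: fall back to the dataset's inductive bias.
    return "C" if is_multi_hop else "B"
```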
# 4 Experimental Setups
In this section, we explain datasets, models, metrics, and implementation details. We provide additional details in Appendix A.
# 4.1 Datasets
In order to simulate a realistic scenario, where different queries have varying complexities, we use both the single-hop and multi-hop QA datasets simultaneously, in the unified experimental setting.
Single-hop QA For simpler queries, we use three benchmark single-hop QA datasets, which consist of queries and their associated documents containing answers, namely 1) SQuAD v1.1 (Rajpurkar et al., 2016), 2) Natural Questions (Kwiatkowski et al., 2019), and 3) TriviaQA (Joshi et al., 2017).
Table 1: Averaged results on a collection of benchmark datasets for open-domain question answering including the single-hop and multi-hop queries, with different LLMs. Self-RAG* is trained with a different base LLM, namely LLaMA2 (Touvron et al., 2023); therefore, we compare the results of FLAN-T5-XL (3B) with the results from Self-RAG with LLaMA2 (7B) and the results of others with the results from Self-RAG with LLaMA2 (13B). We emphasize our results in bold, for easy comparisons.
<table><tr><td rowspan="2">Types</td><td rowspan="2">Methods</td><td colspan="5">FLAN-T5-XL (3B)</td><td colspan="5">FLAN-T5-XXL (11B)</td><td colspan="5">GPT-3.5 (Turbo)</td></tr><tr><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td></tr><tr><td rowspan="2">Simple</td><td>No Retrieval</td><td>14.87</td><td>21.12</td><td>15.97</td><td>0.00</td><td>0.11</td><td>17.83</td><td>25.14</td><td>19.33</td><td>0.00</td><td>0.08</td><td>35.77</td><td>48.56</td><td>44.27</td><td>0.00</td><td>0.71</td></tr><tr><td>Single-step Approach</td><td>34.83</td><td>44.31</td><td>38.87</td><td>1.00</td><td>1.00</td><td>37.87</td><td>47.63</td><td>41.90</td><td>1.00</td><td>1.00</td><td>34.73</td><td>46.99</td><td>45.27</td><td>1.00</td><td>1.00</td></tr><tr><td rowspan="3">Adaptive</td><td>Adaptive Retrieval</td><td>23.87</td><td>32.24</td><td>26.73</td><td>0.50</td><td>0.56</td><td>26.93</td><td>35.67</td><td>29.73</td><td>0.50</td><td>0.54</td><td>35.90</td><td>48.20</td><td>45.30</td><td>0.50</td><td>0.86</td></tr><tr><td>Self-RAG*</td><td>9.90</td><td>20.79</td><td>31.57</td><td>0.72</td><td>0.43</td><td>10.87</td><td>22.98</td><td>34.13</td><td>0.74</td><td>0.23</td><td>10.87</td><td>22.98</td><td>34.13</td><td>0.74</td><td>1.50</td></tr><tr><td>Adaptive-RAG (Ours)</td><td>37.17</td><td>46.94</td><td>42.10</td><td>2.17</td><td>3.60</td><td>38.90</td><td>48.62</td><td>43.77</td><td>1.35</td><td>2.00</td><td>37.97</td><td>50.91</td><td>48.97</td><td>1.03</td><td>1.46</td></tr><tr><td>Complex</td><td>Multi-step Approach</td><td>39.00</td><td>48.85</td><td>43.70</td><td>4.69</td><td>8.81</td><td>40.13</td><td>50.09</td><td>45.20</td><td>2.13</td><td>3.80</td><td>38.13</td><td>50.87</td><td>49.70</td><td>2.81</td><td>3.33</td></tr><tr><td>Oracle</td><td>Adaptive-RAG w/ Oracle</td><td>45.00</td><td>56.28</td><td>49.90</td><td>1.28</td><td>2.11</td><td>47.17</td><td>58.60</td><td>52.20</td><td>0.84</td><td>1.10</td><td>47.70</td><td>62.80</td><td>58.57</td><td>0.50</td><td>1.03</td></tr></table>
Multi-hop QA To consider more complex query scenarios, we use three benchmark multi-hop QA datasets, which require sequential reasoning over multiple documents, namely 1) MuSiQue (Trivedi et al., 2022a), 2) HotpotQA (Yang et al., 2018), and 3) 2WikiMultiHopQA (Ho et al., 2020).
# 4.2 Models
We compare our Adaptive-RAG against relevant models, including three retrieval-augmented LLM strategies (in Section 3.1) and the adaptive retrieval approaches (Mallen et al., 2023; Asai et al., 2024), which can be grouped into one of three categories: Simple, Adaptive, and Complex. Specifically, Simple approaches include the 1) No Retrieval and 2) Single-step Approach-based methods. Adaptive approaches include the 3) Adaptive Retrieval (Mallen et al., 2023), 4) Self-RAG (Asai et al., 2024), and our 5) Adaptive-RAG, which can adaptively perform retrieval based on the question complexity. For the 6) Multi-step Approach, we use the most sophisticated state-of-the-art method (Trivedi et al., 2023), iteratively accessing both the retriever and LLM with Chain-of-Thought reasoning (Wei et al., 2022b), for every query. Note that models across different categories are not directly comparable. Yet, in the ideal setting, Adaptive approaches should be more effective than those in the Simple category while simultaneously being more efficient than the Complex one. Therefore, we also report the performance in an ideal scenario, 7) Adaptive-RAG w/ Oracle, using the oracle classifier with our Adaptive-RAG.
# 4.3 Evaluation Metrics
When it comes to evaluating adaptive models, it is essential to simultaneously consider both the task performance and the efficiency, along with their trade-offs. Thus, we report the results with five metrics, where three of them measure effectiveness and the other two measure efficiency. In particular, for effectiveness, we use F1, EM, and Accuracy (Acc), following the standard evaluation protocol (Mallen et al., 2023; Baek et al., 2023; Asai et al., 2024), where F1 measures the number of overlapping words between the predicted answer and the ground truth, EM measures whether they are identical, and Acc measures whether the predicted answer contains the ground-truth answer. For efficiency, we measure the number of retrieval-and-generate steps and the average time for answering each query, relative to the single-step approach.
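For concreteness, a sketch of the three effectiveness metrics as defined above follows; the normalization here (lowercasing, whitespace splitting) is a simplifying assumption, as standard implementations also strip punctuation and articles.

```python
from collections import Counter

def em(pred: str, gold: str) -> float:
    """Exact match: 1.0 iff the prediction equals the ground truth."""
    return float(pred.strip().lower() == gold.strip().lower())

def f1(pred: str, gold: str) -> float:
    """Word-overlap F1 between prediction and ground truth."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def acc(pred: str, gold: str) -> float:
    """Accuracy: 1.0 iff the prediction contains the ground-truth answer."""
    return float(gold.strip().lower() in pred.lower())
```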
# 4.4 Implementation Details
For a fair comparison, and following Mallen et al. (2023) and Trivedi et al. (2023), we use the same retriever, a term-based sparse retrieval model known as BM25 (Robertson et al., 1994), across all different models. For the external document corpus, we use different sources depending on the dataset type: the Wikipedia corpus preprocessed by Karpukhin et al. (2020) for single-hop datasets, and the preprocessed corpus by Trivedi et al. (2023) for multi-hop datasets. Regarding the LLMs that are used to generate answers, we use the FLAN-T5 series models (Chung et al., 2022), XL with 3B parameters and XXL with 11B parameters, and the GPT-3.5 model (gpt-3.5-turbo-instruct). For the retrieval-augmented LLM design, we follow the implementation details from Trivedi et al. (2023), which include input prompts, instructions, and the number of test samples for evaluation (e.g., 500 samples per dataset). In our Adaptive-RAG, for the query-complexity classifier, we use and train the T5-Large model (Raffel et al., 2020). Specifically, the classifier is trained using the epoch that shows the best performance on the validation set within 100 training iterations, with a learning rate of 3e-5 and AdamW (Loshchilov and Hutter, 2019) as the optimizer. Regarding its training data, we sample and annotate 400 queries from the 6 datasets based on their inductive bias (single-hop for the single-step approach and multi-hop for the multi-step approach). In addition, we use the predicted outcomes of the three different strategies over 400 queries sampled from each dataset. Note that the queries used for classifier training do not overlap with the testing queries for QA.
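A minimal sketch of this classifier training setup is given below, assuming the (query, label) pairs constructed in Section 3.2 and treating classification as text-to-text label generation with T5; the hyperparameters follow the text (T5-Large, learning rate 3e-5, AdamW), while the toy data and all remaining details are illustrative assumptions.

```python
from torch.optim import AdamW
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")
optimizer = AdamW(model.parameters(), lr=3e-5)

# Toy (query, complexity-label) pairs; the real data comes from the two
# automatic labeling strategies described in Section 3.2.
pairs = [("What is the capital of France?", "A"),
         ("Which film directed by a student of John Ford won Best Picture?", "C")]

model.train()
for query, label in pairs:
    inputs = tokenizer(query, return_tensors="pt")
    targets = tokenizer(label, return_tensors="pt").input_ids
    loss = model(**inputs, labels=targets).loss  # cross-entropy over label tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```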
Table 2: Results on each of a collection of datasets with FLAN-T5-XL (3B) as the LLM. We emphasize our results in bold.
<table><tr><td rowspan="2">Data</td><td rowspan="2">Types</td><td rowspan="2">Methods</td><td colspan="5">SQuAD</td><td colspan="5">Natural Questions</td><td colspan="5">TriviaQA</td></tr><tr><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td></tr><tr><td rowspan="7">Single-step</td><td rowspan="2">Simple</td><td>No Retrieval</td><td>3.60</td><td>10.50</td><td>5.00</td><td>0.00</td><td>0.11</td><td>14.20</td><td>19.00</td><td>15.60</td><td>0.00</td><td>0.13</td><td>25.00</td><td>31.80</td><td>27.00</td><td>0.00</td><td>0.13</td></tr><tr><td>Single-step Approach</td><td>27.80</td><td>39.30</td><td>34.00</td><td>1.00</td><td>1.00</td><td>37.80</td><td>47.30</td><td>44.60</td><td>1.00</td><td>1.00</td><td>53.60</td><td>62.40</td><td>60.20</td><td>1.00</td><td>1.00</td></tr><tr><td rowspan="3">Adaptive</td><td>Adaptive Retrieval</td><td>13.40</td><td>23.10</td><td>17.60</td><td>0.50</td><td>0.55</td><td>28.20</td><td>36.00</td><td>33.00</td><td>0.50</td><td>0.56</td><td>38.40</td><td>46.90</td><td>42.60</td><td>0.50</td><td>0.56</td></tr><tr><td>Self-RAG*</td><td>2.20</td><td>11.20</td><td>18.40</td><td>0.63</td><td>0.50</td><td>31.40</td><td>39.00</td><td>33.60</td><td>0.63</td><td>0.17</td><td>12.80</td><td>29.30</td><td>57.00</td><td>0.68</td><td>0.45</td></tr><tr><td>Adaptive-RAG (Ours)</td><td>26.80</td><td>38.30</td><td>33.00</td><td>1.37</td><td>2.02</td><td>37.80</td><td>47.30</td><td>44.60</td><td>1.00</td><td>1.00</td><td>52.20</td><td>60.70</td><td>58.20</td><td>1.23</td><td>1.54</td></tr><tr><td>Complex</td><td>Multi-step Approach</td><td>24.40</td><td>35.60</td><td>29.60</td><td>4.52</td><td>9.03</td><td>38.60</td><td>47.80</td><td>44.20</td><td>5.04</td><td>10.18</td><td>53.80</td><td>62.40</td><td>60.20</td><td>5.28</td><td>9.22</td></tr><tr><td>Oracle</td><td>Adaptive-RAG w/ Oracle</td><td>32.00</td><td>45.60</td><td>38.20</td><td>1.24</td><td>1.60</td><td>47.40</td><td>57.10</td><td>53.60</td><td>1.10</td><td>1.55</td><td>61.60</td><td>70.20</td><td>66.40</td><td>0.79</td><td>1.10</td></tr></table>
<table><tr><td rowspan="2">Data</td><td rowspan="2">Types</td><td rowspan="2">Methods</td><td colspan="5">MuSiQue</td><td colspan="5">HotpotQA</td><td colspan="5">2WikiMultiHopQA</td></tr><tr><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td></tr><tr><td rowspan="7">Multi-step</td><td rowspan="2">Simple</td><td>No Retrieval</td><td>2.40</td><td>10.70</td><td>3.20</td><td>0.00</td><td>0.11</td><td>16.60</td><td>22.71</td><td>17.20</td><td>0.00</td><td>0.11</td><td>27.40</td><td>32.04</td><td>27.80</td><td>0.00</td><td>0.10</td></tr><tr><td>Single-step Approach</td><td>13.80</td><td>22.80</td><td>15.20</td><td>1.00</td><td>1.00</td><td>34.40</td><td>46.15</td><td>36.40</td><td>1.00</td><td>1.00</td><td>41.60</td><td>47.90</td><td>42.80</td><td>1.00</td><td>1.00</td></tr><tr><td rowspan="3">Adaptive</td><td>Adaptive Retrieval</td><td>6.40</td><td>15.80</td><td>8.00</td><td>0.50</td><td>0.55</td><td>23.60</td><td>32.22</td><td>25.00</td><td>0.50</td><td>0.55</td><td>33.20</td><td>39.44</td><td>34.20</td><td>0.50</td><td>0.55</td></tr><tr><td>Self-RAG*</td><td>1.60</td><td>8.10</td><td>12.00</td><td>0.73</td><td>0.51</td><td>6.80</td><td>17.53</td><td>29.60</td><td>0.73</td><td>0.45</td><td>4.60</td><td>19.59</td><td>38.80</td><td>0.93</td><td>0.49</td></tr><tr><td>Adaptive-RAG (Ours)</td><td>23.60</td><td>31.80</td><td>26.00</td><td>3.22</td><td>6.61</td><td>42.00</td><td>53.82</td><td>44.40</td><td>3.55</td><td>5.99</td><td>40.60</td><td>49.75</td><td>46.40</td><td>2.63</td><td>4.68</td></tr><tr><td>Complex</td><td>Multi-step Approach</td><td>23.00</td><td>31.90</td><td>25.80</td><td>3.60</td><td>7.58</td><td>44.60</td><td>56.54</td><td>47.00</td><td>5.53</td><td>9.38</td><td>49.60</td><td>58.85</td><td>55.40</td><td>4.17</td><td>7.37</td></tr><tr><td>Oracle</td><td>Adaptive-RAG w/ Oracle</td><td>24.80</td><td>38.50</td><td>27.00</td><td>1.98</td><td>3.99</td><td>51.20</td><td>64.00</td><td>54.80</td><td>1.59</td><td>2.77</td><td>53.00</td><td>62.30</td><td>59.40</td><td>1.01</td><td>1.69</td></tr></table>
Figure 3: Performance on QA and query-complexity assessment of different adaptive approaches for retrieval-augmented LLMs with FLAN-T5 XL (Left) and XXL (Center). For labeling the complexity of queries, we use the silver data annotated from the prediction outcomes of models (described in Section 3.2). We also provide the confusion matrix across three labels (Right).
# 5 Experimental Results and Analyses
In this section, we show the overall experimental results and offer in-depth analyses of our method.
Main Results First of all, Table 1 shows our main results averaged over all considered datasets, which corroborate our hypothesis that simple retrieval-augmented strategies are less effective than the complex strategy, while the complex one is significantly more expensive than the simple ones. In addition, we report the more granular results with FLAN-T5-XL on each of the single-hop and multi-hop datasets in Table 2 (and more with different LLMs in Table 7 and Table 8 of Appendix), which are consistent with the results observed in Table 1.
However, in a real-world scenario, not all users ask queries of the same level of complexity, which underscores the need for adaptive strategies. Note that, among the adaptive strategies, our Adaptive-RAG shows remarkable effectiveness over the competitors (Table 1). This indicates that merely focusing on the decision of whether to retrieve or not is suboptimal. Also, as shown in Table 2, such simple adaptive strategies are particularly inadequate for handling complex queries in multi-hop datasets, which require aggregating information and reasoning over multiple documents. Meanwhile, our approach can apply a more fine-grained query-handling strategy by further incorporating an iterative module for complex queries. Furthermore, in a realistic setting, we should take into account not only effectiveness but also efficiency. As shown in Table 1, compared to the complex multi-step strategy, our proposed adaptive strategy is significantly more efficient across all model sizes. This is meaningful in this era of LLMs, where the cost of accessing them is a critical factor for practical applications and scalability. Finally, to see the upper bound of our Adaptive-RAG, we report its performance with the oracle classifier, whose classification is perfect. As shown in Table 1 and Table 2, we observe that it achieves the best performance while being much more efficient than our Adaptive-RAG without the oracle classifier. These results support the validity and significance of our proposal for adapting retrieval-augmented LLM strategies based on query complexity, and further suggest the direction of developing more improved classifiers to achieve optimal performance.
Table 3: The exact elapsed time per query and the percentage of the predicted labels from the classifier over all samples.
<table><tr><td>Labels</td><td>Time/Query (Sec.)</td><td>Percentage (%)</td></tr><tr><td>No (A)</td><td>0.35</td><td>8.60</td></tr><tr><td>One (B)</td><td>3.08</td><td>53.33</td></tr><tr><td>Multi (C)</td><td>27.18</td><td>38.07</td></tr></table>
Classifier Performance To understand how the proposed classifier works, we analyze its performance across different complexity labels. As Figure 3 (Left and Center) shows, the classification accuracy of our Adaptive-RAG is better than those of the other adaptive retrieval baselines, which leads to overall QA performance improvements. In other words, this result indicates that our Adaptive-RAG is capable of more accurately classifying the complexity levels with various granularities, which include not performing retrieval, performing retrieval only once, and performing retrieval multiple times. In addition to the true positive performance of our classifier averaged over all those three labels in Figure 3 (Left and Center), we further report its confusion matrix in Figure 3 (Right). We note that the confusion matrix reveals some notable trends: 'C (Multi)' is sometimes misclassified as 'B (One)' (about $31\%$ ) and 'B (One)' as 'C (Multi)' (about $23\%$ ); 'A (No)' is misclassified often as 'B (One)' (about $47\%$ ) and less frequently as 'C (Multi)' (about $22\%$ ). While the overall results in Figure 3 show that our classifier effectively categorizes the three labels, further refining it based on such misclassification would be a meaningful direction for future work.
Analyses on Efficiency for Classifier While Table 1 shows the relative elapsed time for each of the three different RAG strategies, we further provide the exact elapsed time per query for our Adaptive-RAG and the distribution of predicted labels from our query-complexity classifier in Table 3. Similar to the results of the elapsed time in Table 1 (relative time), Table 3 (exact time) shows that efficiency can be substantially improved by identifying simple or straightforward queries.
Analyses on Training Data for Classifier We have shown that the classifier plays an important role in adaptive retrieval. Here, we further analyze the different strategies for training the classifier by ablating our full training strategy, which includes two approaches: generating silver data from the predicted outcomes of models and utilizing the inductive bias in datasets (see Section 3.2).
Table 4: Results on QA and complexity classification with varying the data annotation strategies for training the classifier.
<table><tr><td rowspan="2">Training Strategies</td><td colspan="2">QA</td><td colspan="4">Classifier (Accuracy)</td></tr><tr><td>F1</td><td>Step</td><td>All</td><td>No</td><td>One</td><td>Multi</td></tr><tr><td>Adaptive-RAG (Ours)</td><td>46.94</td><td>1084</td><td>54.52</td><td>30.52</td><td>66.28</td><td>65.45</td></tr><tr><td>w/o Binary</td><td>43.43</td><td>640</td><td>60.30</td><td>62.19</td><td>65.70</td><td>39.55</td></tr><tr><td>w/o Silver</td><td>48.79</td><td>1464</td><td>40.00</td><td>0.00</td><td>53.98</td><td>75.91</td></tr></table>
As Table 4 shows, compared to the training strategy relying solely on the data derived from inductive bias, ours is significantly more efficient. This efficiency is partly because ours also takes into account the case that does not require any documents at all, as also implied by the classification accuracy; meanwhile, queries in the existing datasets do not capture the information on whether retrieval is required or not. On the other hand, in the case of only using the silver data annotated from the correct predictions, while its overall classification accuracy is high, the overall QA performance implies that relying only on the silver data may not be optimal. This may be because the silver data does not cover complexity labels for incorrectly predicted queries, which leads to a lower generalization effect on queries relevant to them. Meanwhile, by also incorporating complexity labels from the dataset bias (single-hop vs. multi-hop), the classifier becomes more accurate in predicting multi-hop queries, leading to better performance. It is worth noting that our automatic labeling strategies are two particular instantiations for training the classifier, and that there could be other instantiations, which we leave as future work.
Analyses on Classifier Size To investigate the sensitivity of our classifier to its size, we conducted further experiments with classifiers of varying sizes. As shown in Table 6, we observe no significant performance differences among them, even with the reduced capacity and fewer parameters of the smaller classifiers. This indicates that our proposed classifier can contribute to resource-efficient settings in real-world use cases, with smaller sizes and without compromising performance.
Case Study We conduct a case study to qualitatively compare our Adaptive-RAG against Adaptive Retrieval. Table 5 shows the classified complexity and the query-handling patterns for both simple and complex questions. First, for the simple single-hop question, our Adaptive-RAG identifies that it is answerable by using only the LLM's parametric knowledge about 'Google'. By contrast, Adaptive Retrieval fetches additional documents, leading to longer processing times and occasionally producing incorrect responses due to the inclusion of partially irrelevant information about 'Microsoft'. Meanwhile, faced with a complex question, Adaptive-RAG seeks out relevant information, including details like 'a son of John Cabot', which may not have been stored in LLMs, while Adaptive Retrieval fails to request such information from external sources, resulting in inaccurate answers.
Table 5: Case study with GPT-3.5 (Turbo), where we represent the factual error in red and the accurate information in blue.
<table><tr><td>Dataset</td><td>Question</td><td>Adaptive Retrieval</td><td>Adaptive-RAG (Ours)</td></tr><tr><td>NQ (Single-hop)</td><td>Which famous corporate logo changed to a flat colour/color sans serif font in its first major change since 1999?</td><td>Query Type: B (Single-step Approach) Reasoning: The famous corporate logo that changed to a flat color sans-serif font in its first major change since 1999 is Microsoft. Microsoft changed its logo in 2012, replacing the italicized, multi-colored font with a simple, flat, grey font. So the answer is: Microsoft.</td><td>Query Type: A (Non Retrieval) Reasoning: The famous corporate logo that changed to a flat color sans serif font in its first major change since 1999 is Google. Google changed its logo in 2015. So the answer is: Google.</td></tr><tr><td>MuSiQue (Multi-hop)</td><td>Who is the child of the Italian navigator who explored the eastern coast of the continent César Gaytan was born in for the English?</td><td>Query Type: A (Non Retrieval) Reasoning: César Gaytan was born in the continent of North America. The Italian navigator who explored the eastern coast of North America for the English was Giovanni Caboto, also known as John Cabot. So the answer is: Giovanni Caboto/John Cabot.</td><td>Query Type: C (Multi-step Approach) Reasoning: Césarr Gaytan was born in Guadalajara, Jalisco, Mexico. The Italian navigator who explored the eastern coast of the continent for the English is John Cabot. John Cabot's son is Sebastian Cabot. So the answer is: Sebastian Cabot.</td></tr></table>
Table 6: Results with varying model sizes for classifiers.
<table><tr><td rowspan="2">Sizes</td><td colspan="2">QA</td><td colspan="4">Classifier (Accuracy)</td></tr><tr><td>F1</td><td>Step</td><td>All</td><td>No</td><td>One</td><td>Multi</td></tr><tr><td>Small (60M)</td><td>45.83</td><td>964</td><td>53.48</td><td>26.65</td><td>70.62</td><td>53.18</td></tr><tr><td>Base (223M)</td><td>45.97</td><td>983</td><td>53.41</td><td>26.42</td><td>69.46</td><td>56.82</td></tr><tr><td>Large (770M)</td><td>46.94</td><td>1084</td><td>54.52</td><td>30.52</td><td>66.28</td><td>65.45</td></tr></table>
# 6 Conclusion
In this work, we proposed the Adaptive Retrieval-Augmented Generation framework, referred to as Adaptive-RAG, to handle queries of various complexities. Specifically, Adaptive-RAG is designed to dynamically adjust the query-handling strategies of a unified retrieval-augmented LLM based on the complexity of the queries it encounters, spanning a spectrum from the non-retrieval-based approach for the most straightforward queries, to the single-step approach for queries of moderate complexity, and finally to the multi-step approach for complex queries. The core step of our Adaptive-RAG lies in determining the complexity of the given query, which is instrumental in selecting the most suitable strategy for answering it. To operationalize this process, we trained a smaller Language Model with query-complexity pairs, which are automatically annotated from the predicted outcomes and the inductive biases in datasets. We validated our Adaptive-RAG on a collection of open-domain QA datasets, covering multiple query complexities including both single- and multi-hop questions. The results demonstrate that our Adaptive-RAG enhances the overall accuracy and efficiency of QA systems, allocating more resources to handle complex queries while efficiently handling simpler ones, compared to the existing one-size-fits-all approaches that tend to be either minimalist or maximalist over varying query complexities.
# Limitations
While our Adaptive-RAG shows clear advantages in effectiveness and efficiency by determining the query complexity and then leveraging the most suitable approach for tackling it, it is important to recognize that there still exist potential avenues for improving the classifier from the perspectives of its training datasets and architecture. Specifically, as there are no available datasets for training the query-complexity classifier, we automatically create new data based on the model prediction outcomes and the inductive dataset biases. However, our labeling process is one specific instantiation of labeling the query complexity, and it may have the potential to label queries incorrectly despite its effectiveness. Therefore, future work may create new datasets that are annotated with a diverse range of query complexities, in addition to the labels of question-answer pairs. Also, as the performance gap between the ideal classifier in Table 1 and the current classifier in Figure 3 indicates, there is still room to improve the effectiveness of the classifier. In other words, our classifier design based on the smaller LM is the initial, simplest instantiation for classifying the query complexity, and based upon it, future work may improve the classifier architecture and its performance, which will positively contribute to the overall QA performance.
# Ethics Statement
The experimental results on Adaptive-RAG validate its applicability in realistic scenarios, where a wide range of diverse user queries exist. Nonetheless, given the potential diversity of real-world user inputs, it is crucial to also consider scenarios where these inputs might be offensive or harmful. We should be aware that such inputs could lead to the retrieval of offensive documents and the generation of inappropriate responses by the retrieval-augmented LLMs. To address this challenge, developing methods to detect and manage offensive or inappropriate content in both user inputs and retrieved documents within the retrieval-augmented framework is essential. We believe that this is a critical area for future work.
# Acknowledgements
This work was supported by Institute for Information and communications Technology Promotion (IITP) grant funded by the Korea government (No. 2018-0-00582, Prediction and augmentation of the credibility distribution via linguistic analysis and automated evidence document collection), Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (RS-2023-00275747), and the Artificial intelligence industrial convergence cluster development project funded by the Ministry of Science and ICT (MSIT, Korea) & Gwangju Metropolitan City.
# References
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan A. Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clement Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Diaz, Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, and et al. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403.
Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2024. Self-RAG: Learning to retrieve, generate, and critique through self-reflection. In The Twelfth International Conference on Learning Representations.
Jinheon Baek, Soyeong Jeong, Minki Kang, Jong Park, and Sung Ju Hwang. 2023. Knowledge-augmented language model verification. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 1720-1736. Association for Computational Linguistics.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 2206-2240. PMLR.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1870-1879. Association for Computational Linguistics.
Sukmin Cho, Jeongyeon Seo, Soyeong Jeong, and Jong C. Park. 2023. Improving zero-shot reader by reducing distractions from irrelevant documents in open-domain question answering. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 3145-3157. Association for Computational Linguistics.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.
Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing A multi-hop QA dataset for comprehensive evaluation of reasoning steps. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 6609-6625. International Committee on Computational Linguistics.
Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 874-880. Association for Computational Linguistics.
Gautier Izacard, Patrick S. H. Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2023. Atlas: Few-shot learning with retrieval augmented language models. J. Mach. Learn. Res., 24:251:1-251:43.
Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, and Jong Park. 2023. Test-time self-adaptive small language models for question answering. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 15459-15469. Association for Computational Linguistics.
Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023.
Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1601-1611. Association for Computational Linguistics.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, November 16-20, 2020. Association for Computational Linguistics.
Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir R. Radev, Noah A. Smith, Yejin Choi, and Kentaro Inui. 2022. Realtime QA: what's the answer right now? arXiv preprint arXiv:2207.13332.
Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. 2022. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP. arXiv preprint arXiv:2212.14024.
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2023. Decomposed prompting: A modular approach for solving complex tasks. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.
Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. 2022. Internet-augmented language models through few-shot prompting for open-domain question answering. arXiv preprint arXiv:2203.05115.
Belinda Z. Li, Sewon Min, Srinivasan Iyer, Yashar Mehdad, and Wen-tau Yih. 2020. Efficient one-pass end-to-end entity linking for questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6433-6441. Association for Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 9802-9822. Association for Computational Linguistics.
OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, pages 8024-8035.
Jayr Alencar Pereira, Robson do Nascimento Fidalgo, Roberto de Alencar Lotufo, and Rodrigo Frassetto Nogueira. 2023. Visconde: Multi-document QA with GPT-3 and neural reranking. In Advances in Information Retrieval - 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2-6, 2023, Proceedings, Part II, volume 13981 of Lecture Notes in Computer Science, pages 534-543. Springer.
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models. In Findings of the Association for Computational Linguistics: EMNLP 2023.
Peng Qi, Haejun Lee, Tg Sido, and Christopher D. Manning. 2021. Answering open-domain questions of varying reasoning steps from text. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 3599-3614. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383-2392. The Association for Computational Linguistics.
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models. Transactions of the Association for Computational Linguistics.
Stephen E. Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, and Mike Gatford. 1994. Okapi at TREC-3. In Proceedings of The Third Text Retrieval Conference, TREC 1994, Gaithersburg, Maryland, USA, November 2-4, 1994, volume 500-225 of NIST Special Publication, pages 109-126. National Institute of Standards and Technology (NIST).
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. REPLUG: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. arXiv preprint arXiv:2307.09288.
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022a. Musique: Multi-hop questions via single-hop question composition. Trans. Assoc. Comput. Linguistics, 10:539-554.
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2023. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 10014-10037. Association for Computational Linguistics.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022a. Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, pages 38-45. Association for Computational Linguistics.
Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Demonstrations, pages 72-77. Association for Computational Linguistics.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. 2023. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
Xinran Zhao, Hongming Zhang, Xiaoman Pan, Wenlin Yao, Dong Yu, and Jianshu Chen. 2023. Thrust: Adaptively propels large language models with external knowledge. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023.
Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, and Tat-Seng Chua. 2021. Retrieving and reading: A comprehensive survey on open-domain question answering. arXiv preprint arXiv:2101.00774.
Figure 4: QA performance (F1) and efficiency (Time/Query) for different retrieval-augmented generation approaches. We use the FLAN-T5-XL (3B) as the base LLM.
# A Additional Experimental Setups
# A.1 Datasets
We use publicly available datasets for both single-hop and multi-hop QA, following Karpukhin et al. (2020) and Trivedi et al. (2023), respectively. We describe the characteristics of each dataset:
1) SQuAD v1.1 (Rajpurkar et al., 2016) is created through a process where annotators write questions based on the documents they read.
2) Natural Questions (Kwiatkowski et al., 2019) is constructed from real user queries issued to Google Search.
3) TriviaQA (Joshi et al., 2017) comprises trivia questions sourced from various quiz websites.
4) MuSiQue (Trivedi et al., 2022a) is collected by composing multiple single-hop queries, to form queries spanning 2-4 hops.
5) HotpotQA (Yang et al., 2018) is constructed by having annotators create questions that link multiple Wikipedia articles.
6) 2WikiMultiHopQA (Ho et al., 2020) is derived from Wikipedia and its associated knowledge graph paths, requiring 2 hops.
# A.2 Models
We describe the details of models as follows:
1) No Retrieval. This approach uses only the LLM itself to generate the answer to the given query.
2) Single-step Approach. This approach first retrieves the relevant knowledge for the given query from the external knowledge sources and then augments the LLM with this retrieved knowledge to generate the answer, performing this retrieve-and-generate step only once.
3) Adaptive Retrieval. This baseline (Mallen et al., 2023) adaptively augments the LLM with the retrieval module, only when the entities appearing in queries are less popular. To extract entities, we use the available entity-linking method (Li et al., 2020), namely BLINK, for questions.
4) Self-RAG. This baseline (Asai et al., 2024) trains the LLM to adaptively perform retrieval and generation, where retrieval is conducted once the LLM predicts a special retrieval token above a certain threshold, and the answer generation follows.
Figure 5: QA performance (F1) and efficiency (Time/Query) for different retrieval-augmented generation approaches. We use the FLAN-T5-XXL (11B) as the base LLM.
5) Adaptive-RAG. This is our model that adaptively selects a retrieval-augmented generation strategy, smoothly switching among the non-retrieval, single-step, and multi-step approaches<sup>4</sup> without architectural changes, based on the query complexity assessed by the classifier.
6) Multi-step Approach. This approach (Trivedi et al., 2023) is the multi-step retrieval-augmented LLM, which iteratively accesses both the retriever and the LLM with interleaved Chain-of-Thought reasoning (Wei et al., 2022b), repeatedly, until it derives the solution or reaches the maximum number of steps.

7) Adaptive-RAG w/ Oracle. This is an ideal scenario of our Adaptive-RAG, equipped with an oracle classifier that perfectly categorizes the query complexity.
# A.3 Implementation Details
For computing resources, we use A100 GPUs with 80GB memory. In addition, due to the significant costs associated with evaluating retrieval-augmented generation models, we perform experiments with a single run. Finally, we implemented all models using PyTorch (Paszke et al., 2019) and the Transformers library (Wolf et al., 2020).
# B Additional Experimental Results
Performance vs Time We further provide a comparison of different retrieval-augmented generation approaches with FLAN-T5-XL and FLAN-T5-XXL models in Figure 4 and Figure 5, respectively, in the context of performance and efficiency trade-offs. Similar to the observation made from the GPT-3.5 model in Figure 1, our proposed Adaptive-RAG is significantly more effective as well as efficient.
Table 7: Results on each of a collection of datasets with FLAN-T5-XXL (11B) as the LLM. We emphasize our results in bold.
<table><tr><td rowspan="2">Data</td><td rowspan="2">Types</td><td rowspan="2">Methods</td><td colspan="5">SQuAD</td><td colspan="5">Natural Questions</td><td colspan="5">TriviaQA</td></tr><tr><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td></tr><tr><td rowspan="7">Single-step</td><td rowspan="2">Simple</td><td>No Retrieval</td><td>7.00</td><td>14.40</td><td>8.40</td><td>0.00</td><td>0.08</td><td>18.80</td><td>25.50</td><td>20.40</td><td>0.00</td><td>0.08</td><td>32.80</td><td>39.20</td><td>35.40</td><td>0.00</td><td>0.08</td></tr><tr><td>Single-step Approach</td><td>28.80</td><td>40.80</td><td>35.00</td><td>1.00</td><td>1.00</td><td>41.40</td><td>51.20</td><td>47.60</td><td>1.00</td><td>1.00</td><td>56.00</td><td>64.70</td><td>61.80</td><td>1.00</td><td>1.00</td></tr><tr><td rowspan="3">Adaptive</td><td>Adaptive Retrieval</td><td>15.60</td><td>25.60</td><td>20.00</td><td>0.50</td><td>0.54</td><td>31.00</td><td>39.70</td><td>35.00</td><td>0.50</td><td>0.54</td><td>44.80</td><td>52.20</td><td>48.60</td><td>0.50</td><td>0.54</td></tr><tr><td>Self-RAG*</td><td>1.60</td><td>11.90</td><td>20.80</td><td>0.59</td><td>0.31</td><td>39.20</td><td>47.10</td><td>42.40</td><td>0.75</td><td>0.09</td><td>14.60</td><td>33.70</td><td>60.20</td><td>0.76</td><td>0.22</td></tr><tr><td>Adaptive-RAG (Ours)</td><td>27.80</td><td>39.80</td><td>34.00</td><td>1.17</td><td>1.50</td><td>41.20</td><td>51.00</td><td>47.40</td><td>1.00</td><td>1.00</td><td>52.00</td><td>60.30</td><td>57.20</td><td>1.03</td><td>1.33</td></tr><tr><td>Complex</td><td>Multi-step Approach</td><td>24.60</td><td>36.90</td><td>30.20</td><td>2.13</td><td>3.83</td><td>39.60</td><td>49.60</td><td>46.40</td><td>2.16</td><td>3.94</td><td>52.60</td><td>61.10</td><td>59.40</td><td>2.17</td><td>4.03</td></tr><tr><td>Oracle</td><td>Adaptive-RAG w/ Oracle</td><td>32.80</td><td>46.90</td><td>38.20</td><td>0.85</td><td>0.94</td><td>51.20</td><td>61.00</td><td>57.00</td><td>0.71</td><td>0.91</td><td>63.40</td><td>71.30</td><td>68.20</td><td>0.51</td><td>0.60</td></tr><tr><td rowspan="2">Data</td><td rowspan="2">Types</td><td rowspan="2">Methods</td><td colspan="5">MuSiQue</td><td colspan="5">HotpotQA</td><td colspan="5">2WikiMultiHopQA</td></tr><tr><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td></tr><tr><td rowspan="7">Multi-step</td><td rowspan="2">Simple</td><td>No Retrieval</td><td>4.20</td><td>13.40</td><td>5.40</td><td>0.00</td><td>0.08</td><td>17.40</td><td>25.44</td><td>18.40</td><td>0.00</td><td>0.09</td><td>26.80</td><td>32.93</td><td>28.00</td><td>0.00</td><td>0.08</td></tr><tr><td>Single-step Approach</td><td>16.80</td><td>25.70</td><td>19.20</td><td>1.00</td><td>1.00</td><td>37.60</td><td>49.27</td><td>39.60</td><td>1.00</td><td>1.00</td><td>46.60</td><td>54.13</td><td>48.20</td><td>1.00</td><td>1.00</td></tr><tr><td rowspan="3">Adaptive</td><td>Adaptive 
Retrieval</td><td>8.40</td><td>17.80</td><td>10.20</td><td>0.50</td><td>0.54</td><td>26.60</td><td>36.01</td><td>27.80</td><td>0.50</td><td>0.54</td><td>35.20</td><td>42.68</td><td>36.80</td><td>0.50</td><td>0.54</td></tr><tr><td>Self-RAG*</td><td>1.20</td><td>8.20</td><td>11.80</td><td>0.68</td><td>0.27</td><td>5.60</td><td>17.86</td><td>30.60</td><td>0.76</td><td>0.26</td><td>3.00</td><td>19.14</td><td>39.00</td><td>0.90</td><td>0.25</td></tr><tr><td>Adaptive-RAG (Ours)</td><td>20.60</td><td>28.50</td><td>23.20</td><td>1.89</td><td>3.12</td><td>44.20</td><td>54.78</td><td>46.80</td><td>1.58</td><td>2.53</td><td>47.60</td><td>57.36</td><td>54.00</td><td>1.46</td><td>2.55</td></tr><tr><td>Complex</td><td>Multi-step Approach</td><td>19.40</td><td>27.50</td><td>21.80</td><td>2.09</td><td>3.66</td><td>47.00</td><td>57.81</td><td>49.40</td><td>2.08</td><td>3.73</td><td>57.60</td><td>67.65</td><td>64.00</td><td>2.17</td><td>3.63</td></tr><tr><td>Oracle</td><td>Adaptive-RAG w/ Oracle</td><td>24.20</td><td>37.20</td><td>26.60</td><td>1.22</td><td>1.71</td><td>52.20</td><td>64.80</td><td>54.60</td><td>0.92</td><td>1.33</td><td>59.20</td><td>70.40</td><td>68.60</td><td>0.82</td><td>1.14</td></tr></table>
|
| 308 |
+
|
| 309 |
+
Table 8: Results on each dataset with GPT-3.5 (Turbo) as the LLM. We emphasize our results in bold.
|
| 310 |
+
|
| 311 |
+
<table><tr><td rowspan="2">Data</td><td rowspan="2">Types</td><td rowspan="2">Methods</td><td colspan="5">SQuAD</td><td colspan="5">Natural Questions</td><td colspan="5">TriviaQA</td></tr><tr><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td></tr><tr><td rowspan="7">Single-step</td><td rowspan="2">Simple</td><td>No Retrieval</td><td>16.00</td><td>29.20</td><td>23.80</td><td>0.00</td><td>0.62</td><td>39.80</td><td>55.70</td><td>55.00</td><td>0.00</td><td>0.56</td><td>64.00</td><td>75.60</td><td>75.80</td><td>0.00</td><td>0.68</td></tr><tr><td>Single-step Approach</td><td>18.00</td><td>33.80</td><td>29.20</td><td>1.00</td><td>1.00</td><td>32.40</td><td>46.80</td><td>54.80</td><td>1.00</td><td>1.00</td><td>55.20</td><td>66.50</td><td>65.80</td><td>1.00</td><td>1.00</td></tr><tr><td rowspan="3">Adaptive</td><td>Adaptive Retrieval</td><td>15.40</td><td>30.00</td><td>24.40</td><td>0.50</td><td>0.81</td><td>36.40</td><td>51.20</td><td>56.60</td><td>0.50</td><td>0.78</td><td>62.00</td><td>71.90</td><td>72.20</td><td>0.50</td><td>0.84</td></tr><tr><td>Self-RAG*</td><td>1.60</td><td>11.90</td><td>20.80</td><td>0.59</td><td>1.91</td><td>39.20</td><td>47.10</td><td>42.40</td><td>0.75</td><td>0.52</td><td>14.60</td><td>33.70</td><td>60.20</td><td>0.76</td><td>1.59</td></tr><tr><td>Adaptive-RAG (Ours)</td><td>19.80</td><td>34.40</td><td>30.00</td><td>0.87</td><td>1.21</td><td>36.80</td><td>52.00</td><td>56.60</td><td>0.68</td><td>0.86</td><td>62.40</td><td>73.80</td><td>73.80</td><td>0.22</td><td>0.79</td></tr><tr><td>Complex</td><td>Multi-step Approach</td><td>17.40</td><td>31.50</td><td>26.20</td><td>2.50</td><td>3.24</td><td>35.60</td><td>49.70</td><td>57.80</td><td>2.58</td><td>3.79</td><td>54.80</td><td>67.10</td><td>68.00</td><td>2.30</td><td>2.65</td></tr><tr><td>Oracle</td><td>Adaptive-RAG w/ Oracle</td><td>28.00</td><td>45.90</td><td>39.40</td><td>0.54</td><td>0.93</td><td>50.00</td><td>65.40</td><td>67.00</td><td>0.28</td><td>0.8</td><td>70.80</td><td>81.00</td><td>80.00</td><td>0.11</td><td>0.73</td></tr><tr><td rowspan="2">Data</td><td rowspan="2">Types</td><td rowspan="2">Methods</td><td colspan="5">MuSiQue</td><td colspan="5">HotpotQA</td><td colspan="5">2WikiMultiHopQA</td></tr><tr><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td><td>EM</td><td>F1</td><td>Acc</td><td>Step</td><td>Time</td></tr><tr><td rowspan="7">Multi-step</td><td rowspan="2">Simple</td><td>No Retrieval</td><td>20.40</td><td>31.30</td><td>24.40</td><td>0.00</td><td>0.81</td><td>37.40</td><td>51.04</td><td>43.20</td><td>0.00</td><td>0.74</td><td>37.00</td><td>48.50</td><td>43.40</td><td>0.00</td><td>0.90</td></tr><tr><td>Single-step Approach</td><td>16.40</td><td>26.70</td><td>23.60</td><td>1.00</td><td>1.00</td><td>39.60</td><td>50.44</td><td>45.60</td><td>1.00</td><td>1.00</td><td>46.80</td><td>57.69</td><td>52.60</td><td>1.00</td><td>1.00</td></tr><tr><td rowspan="3">Adaptive</td><td>Adaptive 
Retrieval</td><td>18.80</td><td>30.30</td><td>24.80</td><td>0.50</td><td>0.90</td><td>38.60</td><td>50.70</td><td>43.20</td><td>0.50</td><td>0.87</td><td>44.20</td><td>55.11</td><td>50.60</td><td>0.50</td><td>0.95</td></tr><tr><td>Self-RAG*</td><td>1.20</td><td>8.20</td><td>11.80</td><td>0.68</td><td>1.66</td><td>5.60</td><td>17.86</td><td>30.60</td><td>0.76</td><td>1.67</td><td>3.00</td><td>19.14</td><td>39.00</td><td>0.90</td><td>1.81</td></tr><tr><td>Adaptive-RAG (Ours)</td><td>21.80</td><td>32.60</td><td>29.60</td><td>1.90</td><td>2.29</td><td>40.40</td><td>52.56</td><td>47.00</td><td>0.93</td><td>1.48</td><td>46.60</td><td>60.09</td><td>56.80</td><td>1.59</td><td>2.23</td></tr><tr><td>Complex</td><td>Multi-step Approach</td><td>23.00</td><td>32.50</td><td>31.60</td><td>3.41</td><td>3.61</td><td>45.80</td><td>58.36</td><td>52.20</td><td>2.73</td><td>3.18</td><td>52.20</td><td>66.08</td><td>62.40</td><td>3.36</td><td>3.35</td></tr><tr><td>Oracle</td><td>Adaptive-RAG w/ Oracle</td><td>29.60</td><td>44.70</td><td>35.60</td><td>0.90</td><td>1.45</td><td>55.60</td><td>69.90</td><td>62.80</td><td>0.54</td><td>1.08</td><td>52.20</td><td>69.90</td><td>66.60</td><td>0.65</td><td>1.21</td></tr></table>
|
| 312 |
+
|
| 313 |
+
Performance per Dataset In addition to detailing the performance on each dataset with the FLAN-T5-XL model, as shown in Table 2, we also present the results for each dataset with the FLAN-T5-XXL and GPT-3.5 models in Table 7 and Table 8, respectively. The experimental results show that our Adaptive-RAG consistently balances efficiency and accuracy. It is worth noting that while the GPT-3.5 model effectively addresses straightforward queries even without document retrieval, it benefits significantly from our Adaptive-RAG in terms of effectiveness when solving complex multi-hop queries.
|
2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0569b51adc066e691bea46b497c74dcc18d5073f0317da65534b732f4bf1628d
|
| 3 |
+
size 781129
|
2024/Adaptive-RAG_ Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/3179a1d7-f1eb-4b22-bb06-32f6ebe41c32_content_list.json
ADDED
|
@@ -0,0 +1,1736 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Adjusting Interpretable Dimensions in Embedding Space with Human Judgments",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
203,
|
| 8 |
+
89,
|
| 9 |
+
796,
|
| 10 |
+
130
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Katrin Erk",
|
| 17 |
+
"bbox": [
|
| 18 |
+
284,
|
| 19 |
+
158,
|
| 20 |
+
383,
|
| 21 |
+
172
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "University of Texas at Austin",
|
| 28 |
+
"bbox": [
|
| 29 |
+
215,
|
| 30 |
+
175,
|
| 31 |
+
453,
|
| 32 |
+
190
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "katrin.erk@utexas.edu",
|
| 39 |
+
"bbox": [
|
| 40 |
+
226,
|
| 41 |
+
192,
|
| 42 |
+
440,
|
| 43 |
+
206
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Marianna Apidianaki",
|
| 50 |
+
"bbox": [
|
| 51 |
+
569,
|
| 52 |
+
158,
|
| 53 |
+
761,
|
| 54 |
+
174
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "University of Pennsylvania",
|
| 61 |
+
"bbox": [
|
| 62 |
+
556,
|
| 63 |
+
175,
|
| 64 |
+
774,
|
| 65 |
+
190
|
| 66 |
+
],
|
| 67 |
+
"page_idx": 0
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"text": "marapi@seas.upenn.edu",
|
| 72 |
+
"bbox": [
|
| 73 |
+
557,
|
| 74 |
+
192,
|
| 75 |
+
773,
|
| 76 |
+
206
|
| 77 |
+
],
|
| 78 |
+
"page_idx": 0
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"text": "Abstract",
|
| 83 |
+
"text_level": 1,
|
| 84 |
+
"bbox": [
|
| 85 |
+
260,
|
| 86 |
+
260,
|
| 87 |
+
339,
|
| 88 |
+
275
|
| 89 |
+
],
|
| 90 |
+
"page_idx": 0
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"type": "text",
|
| 94 |
+
"text": "Embedding spaces contain interpretable dimensions indicating gender, formality in style, or even object properties. This has been observed multiple times. Such interpretable dimensions are becoming valuable tools in different areas of study, from social science to neuroscience. The standard way to compute these dimensions uses contrasting seed words and computes difference vectors over them. This is simple but does not always work well. We combine seed-based vectors with guidance from human ratings of where words fall along a specific dimension, and evaluate on predicting both object properties like size and danger, and the stylistic properties of formality and complexity. We obtain interpretable dimensions with markedly better performance especially in cases where seed-based dimensions do not work well.",
|
| 95 |
+
"bbox": [
|
| 96 |
+
141,
|
| 97 |
+
288,
|
| 98 |
+
460,
|
| 99 |
+
544
|
| 100 |
+
],
|
| 101 |
+
"page_idx": 0
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"type": "text",
|
| 105 |
+
"text": "1 Introduction",
|
| 106 |
+
"text_level": 1,
|
| 107 |
+
"bbox": [
|
| 108 |
+
114,
|
| 109 |
+
557,
|
| 110 |
+
258,
|
| 111 |
+
571
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "Properties are commonly used in linguistics (Katz and Fodor, 1964; Jackendoff, 1990; Wierzbicka, 1996) as well as in psychology (Murphy, 2002) for representing word meanings and concepts. Those same properties are discoverable as interpretable dimensions in word embedding space, and can be used to explore the patterns and regularities encoded by Large Language Models (LLMs) (Mikolov et al., 2013b; Bolukbasi et al., 2016). Because LLMs are trained on texts from many different authors, we can view them as a compact repository of human utterances. This makes them an interesting resource for studying linguistic phenomena, analyzing social contexts of words, or as a stand-in for conceptual knowledge for interpreting brain voxels. Interpretable dimensions provide an attractive and simple way to access this resource (Grand et al., 2022; Kozlowski et al., 2019; Garí Soler and Apidianaki, 2020; Lucy et al., 2022). Compared to probing (Tenney et al., 2019; Conneau et al., 2018), interpretable dimensions allow for a direct explo",
|
| 118 |
+
"bbox": [
|
| 119 |
+
112,
|
| 120 |
+
583,
|
| 121 |
+
490,
|
| 122 |
+
921
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "image",
|
| 128 |
+
"img_path": "images/6e7e6781cef932dda5216f0f523a0f923ffe26ca15ab88040aeaabcd6df72ff6.jpg",
|
| 129 |
+
"image_caption": [
|
| 130 |
+
"Figure 1: Interpretable dimensions for two object categories and features from Grand et al. (2022): clothes by wealth (left), animals by size (right). PCA projection of embeddings, with seed-based (blue) and FIT+S (red) dimensions."
|
| 131 |
+
],
|
| 132 |
+
"image_footnote": [],
|
| 133 |
+
"bbox": [
|
| 134 |
+
515,
|
| 135 |
+
260,
|
| 136 |
+
685,
|
| 137 |
+
378
|
| 138 |
+
],
|
| 139 |
+
"page_idx": 0
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"type": "image",
|
| 143 |
+
"img_path": "images/bc439eac6238999b759907e63f3f6de7b49beeb8eb3e5c5f3b9f7a27e0e09422.jpg",
|
| 144 |
+
"image_caption": [],
|
| 145 |
+
"image_footnote": [],
|
| 146 |
+
"bbox": [
|
| 147 |
+
705,
|
| 148 |
+
258,
|
| 149 |
+
880,
|
| 150 |
+
379
|
| 151 |
+
],
|
| 152 |
+
"page_idx": 0
|
| 153 |
+
},
|
| 154 |
+
{
|
| 155 |
+
"type": "text",
|
| 156 |
+
"text": "ration of LLM embedding space, without external classifiers.",
|
| 157 |
+
"bbox": [
|
| 158 |
+
507,
|
| 159 |
+
493,
|
| 160 |
+
882,
|
| 161 |
+
524
|
| 162 |
+
],
|
| 163 |
+
"page_idx": 0
|
| 164 |
+
},
|
| 165 |
+
{
|
| 166 |
+
"type": "text",
|
| 167 |
+
"text": "The most common way to obtain interpretable dimensions is to specify some seed pairs of antonyms, and take the average over their vector differences. But it is unclear what makes good seed pairs, or even how to test whether a particular property corresponds to a discernible dimension in embedding space. Antoniak and Mimno (2021) and Lucy et al. (2022) express concerns about the quality of commonly used hand-curated seed lexicons and propose metrics for evaluating seeds.",
|
| 168 |
+
"bbox": [
|
| 169 |
+
507,
|
| 170 |
+
527,
|
| 171 |
+
884,
|
| 172 |
+
687
|
| 173 |
+
],
|
| 174 |
+
"page_idx": 0
|
| 175 |
+
},
|
| 176 |
+
{
|
| 177 |
+
"type": "text",
|
| 178 |
+
"text": "In this paper, we take a different approach to addressing the problem of \"bad seeds\" (Antoniak and Mimno, 2021): We propose a method to augment seed-based interpretable dimensions with additional guidance from human ratings, and we show that this augmentation is particularly impactful when the original seed-based dimensions are problematic. Figure 1 shows words for clothes with dimensions for wealth, and animal names with dimensions for size, blue for seed-based dimensions and red for our new fitted dimensions. The fitted dimensions correct overly high wealth estimates for",
|
| 179 |
+
"bbox": [
|
| 180 |
+
507,
|
| 181 |
+
689,
|
| 182 |
+
884,
|
| 183 |
+
882
|
| 184 |
+
],
|
| 185 |
+
"page_idx": 0
|
| 186 |
+
},
|
| 187 |
+
{
|
| 188 |
+
"type": "page_footnote",
|
| 189 |
+
"text": "1Our code and data are available at https://github. com/mariannaapi/interpretable-dimensions.",
|
| 190 |
+
"bbox": [
|
| 191 |
+
507,
|
| 192 |
+
894,
|
| 193 |
+
882,
|
| 194 |
+
919
|
| 195 |
+
],
|
| 196 |
+
"page_idx": 0
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"type": "page_number",
|
| 200 |
+
"text": "2675",
|
| 201 |
+
"bbox": [
|
| 202 |
+
480,
|
| 203 |
+
927,
|
| 204 |
+
519,
|
| 205 |
+
940
|
| 206 |
+
],
|
| 207 |
+
"page_idx": 0
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"type": "footer",
|
| 211 |
+
"text": "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics:",
|
| 212 |
+
"bbox": [
|
| 213 |
+
139,
|
| 214 |
+
945,
|
| 215 |
+
857,
|
| 216 |
+
957
|
| 217 |
+
],
|
| 218 |
+
"page_idx": 0
|
| 219 |
+
},
|
| 220 |
+
{
|
| 221 |
+
"type": "footer",
|
| 222 |
+
"text": "Human Language Technologies (Volume 1: Long Papers), pages 2675-2686",
|
| 223 |
+
"bbox": [
|
| 224 |
+
268,
|
| 225 |
+
959,
|
| 226 |
+
731,
|
| 227 |
+
971
|
| 228 |
+
],
|
| 229 |
+
"page_idx": 0
|
| 230 |
+
},
|
| 231 |
+
{
|
| 232 |
+
"type": "footer",
|
| 233 |
+
"text": "June 16-21, 2024 ©2024 Association for Computational Linguistics",
|
| 234 |
+
"bbox": [
|
| 235 |
+
292,
|
| 236 |
+
972,
|
| 237 |
+
705,
|
| 238 |
+
985
|
| 239 |
+
],
|
| 240 |
+
"page_idx": 0
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"type": "text",
|
| 244 |
+
"text": "tights and stockings, and exaggerated size estimates for bee and ant.",
|
| 245 |
+
"bbox": [
|
| 246 |
+
112,
|
| 247 |
+
84,
|
| 248 |
+
485,
|
| 249 |
+
115
|
| 250 |
+
],
|
| 251 |
+
"page_idx": 1
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"type": "text",
|
| 255 |
+
"text": "While interpretable dimensions are useful both to social science and to cognitive science, there is an important difference between the fields: In social science, crowd-sourced datasets cannot be trusted in absolute terms, because annotators may have social biases of their own; this is the scenario that Antoniak and Mimno (2021) address. In cognitive science however, experimental data from human participants is central (though the method used to solicit data can have an influence on the outcome). We work with data from cognitive science here, so we can use human ratings to improve seed dimensions.",
|
| 256 |
+
"bbox": [
|
| 257 |
+
112,
|
| 258 |
+
117,
|
| 259 |
+
487,
|
| 260 |
+
325
|
| 261 |
+
],
|
| 262 |
+
"page_idx": 1
|
| 263 |
+
},
|
| 264 |
+
{
|
| 265 |
+
"type": "text",
|
| 266 |
+
"text": "Our new method draws inspiration from a completely separate strand of research on interpretable dimensions, in the context of knowledge graph embeddings (Derrac and Schockaert, 2015; Jameel and Schockaert, 2016; Bouraoui et al., 2022). There, interpretable dimensions are learned using labeled training data. In the current paper, we use a similar learning strategy, and apply it to a combination of seed-based dimensions and labeled training data. We apply this technique to predict human ratings on object properties and stylistic aspects of words, and find that it improves performance particularly in cases where seed-based dimensions underperform, and that in contrast to seed-based dimensions it is able to make predictions at the same scale as the original ratings.",
|
| 267 |
+
"bbox": [
|
| 268 |
+
115,
|
| 269 |
+
328,
|
| 270 |
+
489,
|
| 271 |
+
583
|
| 272 |
+
],
|
| 273 |
+
"page_idx": 1
|
| 274 |
+
},
|
| 275 |
+
{
|
| 276 |
+
"type": "text",
|
| 277 |
+
"text": "A larger issue is: When can we trust that an interpretable dimension shows us what the LLM truly \"knows\" about the property in question, that we are not misled by noise in our tool? This same worry also arises for probing classifiers (Hewitt and Liang, 2019; Belinkov, 2022).<sup>2</sup> We take one step towards addressing this issue: By combining two sources of information, seeds and human annotation, we hope to reduce noise present in either source.",
|
| 278 |
+
"bbox": [
|
| 279 |
+
112,
|
| 280 |
+
587,
|
| 281 |
+
489,
|
| 282 |
+
746
|
| 283 |
+
],
|
| 284 |
+
"page_idx": 1
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"type": "text",
|
| 288 |
+
"text": "2 Related Work",
|
| 289 |
+
"text_level": 1,
|
| 290 |
+
"bbox": [
|
| 291 |
+
112,
|
| 292 |
+
762,
|
| 293 |
+
268,
|
| 294 |
+
778
|
| 295 |
+
],
|
| 296 |
+
"page_idx": 1
|
| 297 |
+
},
|
| 298 |
+
{
|
| 299 |
+
"type": "text",
|
| 300 |
+
"text": "Interpretable dimensions in word embedding space have first been observed in NLP (Bolukbasi et al.,",
|
| 301 |
+
"bbox": [
|
| 302 |
+
112,
|
| 303 |
+
791,
|
| 304 |
+
489,
|
| 305 |
+
822
|
| 306 |
+
],
|
| 307 |
+
"page_idx": 1
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"type": "text",
|
| 311 |
+
"text": "2016), and the idea was then taken up in neuroscience and social science. Grand et al. (2022) discover dimensions for objects' scalar properties (e.g., DANGER, SIZE, SPEED, WEALTH). Kozlowski et al. (2019) identify dimensions including AFFLUENCE, GENDER, RACE and MORALITY, and show that concepts such as sports (e.g., golf, boxing) and music genres (e.g., opera, rap, jazz) are ordered along these axes in ways that match cultural stereotypes. Garg et al. (2018) explore ethnic stereotypes, and Stoltz and Taylor (2021) go as far as to propose a cultural cartography with word embeddings. An et al. (2018) use a large number of dimensions to characterize sentiment, which Kwak et al. (2021) apply to whole documents. Interpretable dimensions have also been used to represent linguistic notions, such as complexity and scalar adjective intensity (Garí Soler and Apidianaki, 2020, 2021b; Lyu et al., 2023). In our work, we explore dimensions addressing object properties in the Grand et al. (2022) datasets, and the abstract notions of formality and complexity.",
|
| 312 |
+
"bbox": [
|
| 313 |
+
507,
|
| 314 |
+
84,
|
| 315 |
+
884,
|
| 316 |
+
437
|
| 317 |
+
],
|
| 318 |
+
"page_idx": 1
|
| 319 |
+
},
|
| 320 |
+
{
|
| 321 |
+
"type": "text",
|
| 322 |
+
"text": "In all these studies, dimensions are discovered using the seed-based methodology, where a few seed pairs of antonyms are specified and the dimension is computed as the average over vector differences for these pairs. This method is simpler than alternative representation approaches (e.g., the multi-task learning framework of Allaway and McKeown (2021)).",
|
| 323 |
+
"bbox": [
|
| 324 |
+
507,
|
| 325 |
+
437,
|
| 326 |
+
884,
|
| 327 |
+
565
|
| 328 |
+
],
|
| 329 |
+
"page_idx": 1
|
| 330 |
+
},
|
| 331 |
+
{
|
| 332 |
+
"type": "text",
|
| 333 |
+
"text": "Seed pair selection has until now been ad hoc; but some choices, such as the selected word pairs, their number and order, and the way they are combined, have a strong impact on the quality of the derived dimension. Antoniak and Mimno (2021) address the \"bad seeds\" problem by measuring the coherence of each seed set pairing after mapping to the bias subspace: When all words in the vocabulary are projected onto the subspace, the two seed sets must be drawn as far apart as possible. Lucy et al. (2022) propose to measure the semantic axis' self-consistency using a leave-one-out approach, where each seed is compared to an axis constructed from the remaining seeds. A good seed, when left out, should be closer to the pole it belongs to.",
|
| 334 |
+
"bbox": [
|
| 335 |
+
507,
|
| 336 |
+
567,
|
| 337 |
+
882,
|
| 338 |
+
808
|
| 339 |
+
],
|
| 340 |
+
"page_idx": 1
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"type": "text",
|
| 344 |
+
"text": "In our approach, we do not test for seed quality. Instead, we use human ratings to improve on seed-based dimensions. Our approach is inspired by work on knowledge graph embeddings (Derrac and Schockaert, 2015; Jameel and Schockaert, 2016; Bouraoui et al., 2020). Drawing on the conceptual spaces of Gärdenfors (2014) for intuition,",
|
| 345 |
+
"bbox": [
|
| 346 |
+
507,
|
| 347 |
+
809,
|
| 348 |
+
884,
|
| 349 |
+
921
|
| 350 |
+
],
|
| 351 |
+
"page_idx": 1
|
| 352 |
+
},
|
| 353 |
+
{
|
| 354 |
+
"type": "page_footnote",
|
| 355 |
+
"text": "There is a related question of whether information encoded in the model is also relevant for downstream performance (Ravichander et al., 2021; Lyu et al., 2024). This question is not so central for linguists or cognitive scientists interested in knowledge reflected in an LLM. What is central for them is that the dimension really encodes the property of interest.",
|
| 356 |
+
"bbox": [
|
| 357 |
+
112,
|
| 358 |
+
835,
|
| 359 |
+
489,
|
| 360 |
+
919
|
| 361 |
+
],
|
| 362 |
+
"page_idx": 1
|
| 363 |
+
},
|
| 364 |
+
{
|
| 365 |
+
"type": "page_number",
|
| 366 |
+
"text": "2676",
|
| 367 |
+
"bbox": [
|
| 368 |
+
480,
|
| 369 |
+
928,
|
| 370 |
+
519,
|
| 371 |
+
940
|
| 372 |
+
],
|
| 373 |
+
"page_idx": 1
|
| 374 |
+
},
|
| 375 |
+
{
|
| 376 |
+
"type": "text",
|
| 377 |
+
"text": "Jameel and Schockaert (2016) learn embeddings of knowledge graph nodes that include interpretable dimensions for properties. Like us, they learn interpretable dimensions using labeled training data. Our objective function is an adaptation of their objective function, but still different as they also learn the space while we have a fixed space.",
|
| 378 |
+
"bbox": [
|
| 379 |
+
110,
|
| 380 |
+
84,
|
| 381 |
+
489,
|
| 382 |
+
197
|
| 383 |
+
],
|
| 384 |
+
"page_idx": 2
|
| 385 |
+
},
|
| 386 |
+
{
|
| 387 |
+
"type": "text",
|
| 388 |
+
"text": "For constructing interpretable dimensions, most previous work used static embeddings (GloVe (Pennington et al., 2014) and word2vec (Mikolov et al., 2013a)). Recent work extends the methodology to contextualized representations (Garí Soler and Apidianaki, 2020; Lucy et al., 2022). We experiment with both kinds of embeddings.",
|
| 389 |
+
"bbox": [
|
| 390 |
+
112,
|
| 391 |
+
198,
|
| 392 |
+
489,
|
| 393 |
+
310
|
| 394 |
+
],
|
| 395 |
+
"page_idx": 2
|
| 396 |
+
},
|
| 397 |
+
{
|
| 398 |
+
"type": "text",
|
| 399 |
+
"text": "3 Methods",
|
| 400 |
+
"text_level": 1,
|
| 401 |
+
"bbox": [
|
| 402 |
+
112,
|
| 403 |
+
324,
|
| 404 |
+
225,
|
| 405 |
+
338
|
| 406 |
+
],
|
| 407 |
+
"page_idx": 2
|
| 408 |
+
},
|
| 409 |
+
{
|
| 410 |
+
"type": "text",
|
| 411 |
+
"text": "3.1 Models",
|
| 412 |
+
"text_level": 1,
|
| 413 |
+
"bbox": [
|
| 414 |
+
112,
|
| 415 |
+
351,
|
| 416 |
+
218,
|
| 417 |
+
365
|
| 418 |
+
],
|
| 419 |
+
"page_idx": 2
|
| 420 |
+
},
|
| 421 |
+
{
|
| 422 |
+
"type": "text",
|
| 423 |
+
"text": "Seed-based dimensions (SEED model). The seed-based method is the most commonly used for computing interpretable dimensions (Bolukbasi et al., 2016; Kozlowski et al., 2019; Dev and Phillips, 2019; Garí Soler and Apidianaki, 2021b; Grand et al., 2022). A group of seed words are chosen which represent opposite ends of the dimension. For the DANGER dimension in Grand et al. (2022), for example, the seeds are {safe, harmless, calm} for the positive side and {dangerous, deadly, threatening} for the negative side. For each pair of a positive and negative seed word $p, n$ with vectors $\\vec{p}, \\vec{n}$ , the difference vector $\\vec{p} - \\vec{n}$ is computed; this is a first estimate of the interpretable dimension, but the vectors can differ substantially across seed pairs. To obtain a more stable estimate, the vector for the interpretable dimension is then computed as the average of the difference vectors from individual seed pairs. The rating of any word $a$ on the property $d$ with interpretable dimension $\\vec{d}$ – in our example from above, DANGER – is then predicted as the scalar projection onto the dimension:",
|
| 424 |
+
"bbox": [
|
| 425 |
+
110,
|
| 426 |
+
373,
|
| 427 |
+
489,
|
| 428 |
+
728
|
| 429 |
+
],
|
| 430 |
+
"page_idx": 2
|
| 431 |
+
},
|
| 432 |
+
{
|
| 433 |
+
"type": "equation",
|
| 434 |
+
"text": "\n$$\n| | \\operatorname {p r o j} _ {\\vec {a}} (\\vec {d}) | | = \\frac {\\vec {a} \\cdot \\vec {d}}{| | \\vec {d} | |}\n$$\n",
|
| 435 |
+
"text_format": "latex",
|
| 436 |
+
"bbox": [
|
| 437 |
+
226,
|
| 438 |
+
740,
|
| 439 |
+
378,
|
| 440 |
+
780
|
| 441 |
+
],
|
| 442 |
+
"page_idx": 2
|
| 443 |
+
},
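
To make the seed-based construction concrete, the following is a minimal NumPy sketch (not the authors' released code; the toy `emb` dictionary stands in for real GloVe vectors, and the DANGER seeds follow the example above):

```python
import numpy as np

def seed_dimension(emb, pos_seeds, neg_seeds):
    # Average the difference vectors p - n over all seed pairs.
    return np.mean([emb[p] - emb[n] for p, n in zip(pos_seeds, neg_seeds)], axis=0)

def scalar_projection(a, d):
    # Predicted rating of word vector a on dimension d: (a . d) / ||d||.
    return a @ d / np.linalg.norm(d)

# Toy stand-ins for 300-dimensional GloVe vectors.
rng = np.random.default_rng(0)
words = ["safe", "harmless", "calm", "dangerous", "deadly", "threatening", "tiger"]
emb = {w: rng.normal(size=300) for w in words}

danger = seed_dimension(emb, pos_seeds=["safe", "harmless", "calm"],
                        neg_seeds=["dangerous", "deadly", "threatening"])
print(scalar_projection(emb["tiger"], danger))
```
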
|
| 444 |
+
{
|
| 445 |
+
"type": "text",
|
| 446 |
+
"text": "Fitted dimensions (FIT model). Whenever we have gold ratings on some dimension, like human judgments on degrees of danger of different animals (Grand et al., 2022) or gold ratings for complexity (Lyu et al., 2023), we can estimate a direction in embedding space that best matches the gold ratings. We adapted an idea from Jameel and Schockaert (2016), who learn an embedding space",
|
| 447 |
+
"bbox": [
|
| 448 |
+
112,
|
| 449 |
+
791,
|
| 450 |
+
489,
|
| 451 |
+
921
|
| 452 |
+
],
|
| 453 |
+
"page_idx": 2
|
| 454 |
+
},
|
| 455 |
+
{
|
| 456 |
+
"type": "text",
|
| 457 |
+
"text": "for knowledge graph nodes in such a way that properties of the nodes correspond to dimensions in space. But rather than learning a new space, we need to use an existing space spanned by static or contextualized embeddings, because it is these spaces, and the patterns in human language use that they encode, that we want to analyze.",
|
| 458 |
+
"bbox": [
|
| 459 |
+
507,
|
| 460 |
+
84,
|
| 461 |
+
884,
|
| 462 |
+
197
|
| 463 |
+
],
|
| 464 |
+
"page_idx": 2
|
| 465 |
+
},
|
| 466 |
+
{
|
| 467 |
+
"type": "text",
|
| 468 |
+
"text": "We proceed as follows. Let $W = \\langle w_1, \\dots, w_n \\rangle$ be an annotated dataset of $n$ words with real-valued gold ratings $\\hat{Y} = \\langle \\hat{y}_1, \\dots, \\hat{y}_n \\rangle$ for some feature $f$ . Let $\\vec{w}_i$ be the embedding of word $w_i$ . For the dimension $\\vec{f}$ to be computed for feature $f$ in that same embedding space, we stipulate that the scalar projection of $\\vec{w}_i$ onto $\\vec{f}$ be proportional to the gold rating $\\hat{y}_i$ . For example, say the gold rating (average human rating) of dolphin on the DANGER scale (on a scale of 1-5) is 2.1, and the gold rating of tiger is 4.9. Then we want the length of the projection $\\mathrm{proj}_{\\mathrm{dolphi}}(\\mathrm{DANGER})$ to be proportional to $c_{\\mathrm{DANGER}} \\cdot 2.1 + b_{\\mathrm{DANGER}}$ , and the length of the projection $\\mathrm{proj}_{\\mathrm{tiger}}(\\mathrm{DANGER})$ to be proportional to $c_{\\mathrm{DANGER}} \\cdot 4.9 + b_{\\mathrm{DANGER}}$ , for some weight and bias constants $c_{\\mathrm{DANGER}}, b_{\\mathrm{DANGER}} \\in \\mathbb{R}$ . So in general, we would like to have",
|
| 469 |
+
"bbox": [
|
| 470 |
+
507,
|
| 471 |
+
198,
|
| 472 |
+
884,
|
| 473 |
+
469
|
| 474 |
+
],
|
| 475 |
+
"page_idx": 2
|
| 476 |
+
},
|
| 477 |
+
{
|
| 478 |
+
"type": "equation",
|
| 479 |
+
"text": "\n$$\n\\frac {\\vec {w _ {i}} \\cdot \\vec {f}}{| | \\vec {f} | |} = c _ {f} \\hat {y} _ {i} + b _ {f}\n$$\n",
|
| 480 |
+
"text_format": "latex",
|
| 481 |
+
"bbox": [
|
| 482 |
+
618,
|
| 483 |
+
476,
|
| 484 |
+
771,
|
| 485 |
+
516
|
| 486 |
+
],
|
| 487 |
+
"page_idx": 2
|
| 488 |
+
},
|
| 489 |
+
{
|
| 490 |
+
"type": "text",
|
| 491 |
+
"text": "We turn this into a loss function for computing a fitted dimension $f$ , dropping the denominator $||\\vec{f}||$ :",
|
| 492 |
+
"bbox": [
|
| 493 |
+
505,
|
| 494 |
+
523,
|
| 495 |
+
882,
|
| 496 |
+
556
|
| 497 |
+
],
|
| 498 |
+
"page_idx": 2
|
| 499 |
+
},
|
| 500 |
+
{
|
| 501 |
+
"type": "equation",
|
| 502 |
+
"text": "\n$$\nJ _ {f} = \\sum_ {w _ {i}} \\left(\\vec {w _ {i}} \\cdot \\vec {f} - c _ {f} \\hat {y} _ {i} - b _ {f}\\right) ^ {2}\n$$\n",
|
| 503 |
+
"text_format": "latex",
|
| 504 |
+
"bbox": [
|
| 505 |
+
573,
|
| 506 |
+
564,
|
| 507 |
+
820,
|
| 508 |
+
598
|
| 509 |
+
],
|
| 510 |
+
"page_idx": 2
|
| 511 |
+
},
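
As an illustration of fitting a dimension to gold ratings: note that $J_f$ has a degenerate minimum at $\vec{f} = 0$, $c_f = b_f = 0$, so the sketch below, which is our own simplification rather than the paper's optimizer, fixes $c_f = 1$ and reduces the fit to ordinary least squares:

```python
import numpy as np

def fit_dimension(W, y):
    # Solve W @ f + b ~= y in the least-squares sense (c_f fixed to 1).
    # W: (n, d) array of word embeddings; y: (n,) gold ratings.
    X = np.hstack([W, np.ones((len(y), 1))])   # last column absorbs the bias
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta[:-1], theta[-1]               # direction f, bias b

def predict_ratings(W, f, b):
    # Scalar products shifted back onto the gold rating scale.
    return W @ f + b
```
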
|
| 512 |
+
{
|
| 513 |
+
"type": "text",
|
| 514 |
+
"text": "Fitted dimensions with seed words (FIT+SW model). We also test whether fitted dimensions can be guided by the seed words used to make seed-based dimensions. The first method follows the intuition of Antoniak and Mimno (2021) that the scalar projections of seed words should sit \"far out\" on an interpretable dimension, further than other words. The FIT+SW model simply extends the collection $W$ of rated words by the seed words. We make synthetic gold ratings for the seedwords, giving them extreme ratings: $\\max (\\hat{Y}) + o$ for positive seed words, and $\\min (\\hat{Y}) - o$ for negative seedwords, for an offset $o$ that is a hyperparameter. We optionally add a small amount of random jitter (sampled from the interval [0.001, 0.005]) so that the seed words don't all have the same rating.",
|
| 515 |
+
"bbox": [
|
| 516 |
+
507,
|
| 517 |
+
607,
|
| 518 |
+
882,
|
| 519 |
+
865
|
| 520 |
+
],
|
| 521 |
+
"page_idx": 2
|
| 522 |
+
},
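
A sketch of this synthetic-rating construction; the function name and the uniform sampling of jitter from the stated interval are our own illustrative assumptions:

```python
import numpy as np

def extend_with_seeds(W, y, pos_vecs, neg_vecs, offset, rng=None):
    """Append seed-word vectors to W with extreme synthetic ratings:
    max(y) + offset for positive seeds, min(y) - offset for negative seeds,
    plus small jitter so the seed ratings are not exactly tied."""
    rng = rng or np.random.default_rng(0)
    jit_pos = rng.uniform(0.001, 0.005, size=len(pos_vecs))
    jit_neg = rng.uniform(0.001, 0.005, size=len(neg_vecs))
    W_ext = np.vstack([W, pos_vecs, neg_vecs])
    y_ext = np.concatenate([y, y.max() + offset + jit_pos,
                               y.min() - offset - jit_neg])
    return W_ext, y_ext
```
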
|
| 523 |
+
{
|
| 524 |
+
"type": "text",
|
| 525 |
+
"text": "Fitted dimensions with seed dimensions (FIT+SD model). Our second way of extending fitted dimensions with seed word information is built on",
|
| 526 |
+
"bbox": [
|
| 527 |
+
507,
|
| 528 |
+
873,
|
| 529 |
+
882,
|
| 530 |
+
920
|
| 531 |
+
],
|
| 532 |
+
"page_idx": 2
|
| 533 |
+
},
|
| 534 |
+
{
|
| 535 |
+
"type": "page_number",
|
| 536 |
+
"text": "2677",
|
| 537 |
+
"bbox": [
|
| 538 |
+
480,
|
| 539 |
+
927,
|
| 540 |
+
519,
|
| 541 |
+
940
|
| 542 |
+
],
|
| 543 |
+
"page_idx": 2
|
| 544 |
+
},
|
| 545 |
+
{
|
| 546 |
+
"type": "text",
|
| 547 |
+
"text": "the idea that seed-based dimensions and human ratings both provide useful information for fitting an interpretable dimension, and that they should be combined. So we use an overall loss function of",
|
| 548 |
+
"bbox": [
|
| 549 |
+
112,
|
| 550 |
+
84,
|
| 551 |
+
487,
|
| 552 |
+
148
|
| 553 |
+
],
|
| 554 |
+
"page_idx": 3
|
| 555 |
+
},
|
| 556 |
+
{
|
| 557 |
+
"type": "equation",
|
| 558 |
+
"text": "\n$$\nJ = \\alpha J _ {f} + (1 - \\alpha) J _ {d} (D)\n$$\n",
|
| 559 |
+
"text_format": "latex",
|
| 560 |
+
"bbox": [
|
| 561 |
+
200,
|
| 562 |
+
162,
|
| 563 |
+
401,
|
| 564 |
+
180
|
| 565 |
+
],
|
| 566 |
+
"page_idx": 3
|
| 567 |
+
},
|
| 568 |
+
{
|
| 569 |
+
"type": "text",
|
| 570 |
+
"text": "where $J_{f}$ is the loss function from above, and $\\alpha$ is a hyperparameter. $J_{d}(D)$ is a loss that measures distance of the fitted dimension $\\vec{f}$ from a set $D$ of seed-based dimensions, defined as",
|
| 571 |
+
"bbox": [
|
| 572 |
+
112,
|
| 573 |
+
192,
|
| 574 |
+
489,
|
| 575 |
+
255
|
| 576 |
+
],
|
| 577 |
+
"page_idx": 3
|
| 578 |
+
},
|
| 579 |
+
{
|
| 580 |
+
"type": "equation",
|
| 581 |
+
"text": "\n$$\nJ _ {d} (D) = \\sum_ {d \\in D} 1 - \\operatorname {c o s i n e} (\\vec {d}, \\vec {f})\n$$\n",
|
| 582 |
+
"text_format": "latex",
|
| 583 |
+
"bbox": [
|
| 584 |
+
181,
|
| 585 |
+
267,
|
| 586 |
+
418,
|
| 587 |
+
300
|
| 588 |
+
],
|
| 589 |
+
"page_idx": 3
|
| 590 |
+
},
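
One way to optimize the combined objective is a generic minimizer over $(\vec{f}, c_f, b_f)$; the SciPy-based sketch below is an illustrative formulation, not the paper's implementation, and warm-starts from the mean of the seed-based dimensions:

```python
import numpy as np
from scipy.optimize import minimize

def fit_with_seed_dims(W, y, seed_dims, alpha=0.5):
    """Minimize J = alpha * J_f + (1 - alpha) * sum_d (1 - cos(d, f))."""
    D = np.asarray(seed_dims)          # the set D of seed-based dimensions
    dim = W.shape[1]

    def loss(theta):
        f, c, b = theta[:dim], theta[dim], theta[dim + 1]
        J_f = np.sum((W @ f - c * y - b) ** 2)
        cos = D @ f / (np.linalg.norm(D, axis=1) * (np.linalg.norm(f) + 1e-12))
        return alpha * J_f + (1 - alpha) * np.sum(1.0 - cos)

    x0 = np.concatenate([D.mean(axis=0), [1.0, 0.0]])  # warm start from seeds
    res = minimize(loss, x0, method="L-BFGS-B")
    return res.x[:dim], res.x[dim], res.x[dim + 1]     # f, c_f, b_f
```
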
|
| 591 |
+
{
|
| 592 |
+
"type": "text",
|
| 593 |
+
"text": "Fitted dimensions with seeds as words and dimensions (FIT+S model). Our final model uses seeds both as seed words, as in FIT+SW, and as seed dimensions, as in FIT+SD.",
|
| 594 |
+
"bbox": [
|
| 595 |
+
112,
|
| 596 |
+
312,
|
| 597 |
+
489,
|
| 598 |
+
375
|
| 599 |
+
],
|
| 600 |
+
"page_idx": 3
|
| 601 |
+
},
|
| 602 |
+
{
|
| 603 |
+
"type": "text",
|
| 604 |
+
"text": "Baselines. We compare our methods to a baseline which ranks words by frequency (FREQ). Frequency has been a strong baseline for complexity and formality in previous work, given that rare words tend to be more complex than frequently used words (Brooke et al., 2010). We use log-transformed frequency counts in the Google N-gram corpus (Brants and Franz, 2006). We also use a random baseline, which assigns to each word a randomly selected score in the range [-3, 3].<sup>3</sup>",
|
| 605 |
+
"bbox": [
|
| 606 |
+
112,
|
| 607 |
+
386,
|
| 608 |
+
489,
|
| 609 |
+
546
|
| 610 |
+
],
|
| 611 |
+
"page_idx": 3
|
| 612 |
+
},
|
| 613 |
+
{
|
| 614 |
+
"type": "text",
|
| 615 |
+
"text": "3.2 Evaluation metrics",
|
| 616 |
+
"text_level": 1,
|
| 617 |
+
"bbox": [
|
| 618 |
+
112,
|
| 619 |
+
558,
|
| 620 |
+
309,
|
| 621 |
+
571
|
| 622 |
+
],
|
| 623 |
+
"page_idx": 3
|
| 624 |
+
},
|
| 625 |
+
{
|
| 626 |
+
"type": "text",
|
| 627 |
+
"text": "In contrast to interpretable dimensions computed from seed words, the FIT models use training data: words with human ratings for the property in question. When we evaluate these models, we use up some of the data for training, leaving less for testing. To mitigate the issue, we do cross-validation, and we focus on evaluation metrics that work well with smaller test sets. We do not use the correlation metrics used in Gari Soler and Apidianaki (2020) and Grand et al. (2022), as significance tests become unreliable with small datasets. Instead, we use a variant of pairwise rank accuracy, a metric used in Gari Soler and Apidianaki (2020), Cocos et al. (2018), and Grand et al. (2022).",
|
| 628 |
+
"bbox": [
|
| 629 |
+
112,
|
| 630 |
+
579,
|
| 631 |
+
489,
|
| 632 |
+
804
|
| 633 |
+
],
|
| 634 |
+
"page_idx": 3
|
| 635 |
+
},
|
| 636 |
+
{
|
| 637 |
+
"type": "text",
|
| 638 |
+
"text": "Pairwise rank accuracy measures the percentage of pairs of words whose ordering in the gold ratings is the same as in the model predictions. We define a new variant which we call extended pairwise rank accuracy, $\\mathbf{r}^{+}$ -acc, which measures pairwise",
|
| 639 |
+
"bbox": [
|
| 640 |
+
112,
|
| 641 |
+
806,
|
| 642 |
+
487,
|
| 643 |
+
885
|
| 644 |
+
],
|
| 645 |
+
"page_idx": 3
|
| 646 |
+
},
|
| 647 |
+
{
|
| 648 |
+
"type": "text",
|
| 649 |
+
"text": "rank accuracy among words in the test set, and additionally pairwise rank accuracy between each test word and each training word. For example, if tiger and butterfly are in the training set for DANGER, and cat is in the test, we check whether the score assigned to cat ranks it after tiger and before butterfly. This metric gives us more evidence on the quality of predictions than pairwise rank accuracy on its own because it includes more comparisons, thus making the metric less sparse. Let $<_{g}, <_{m}$ be two complete orderings of the words in $W$ , the gold and model orderings, respectively. For words $w, w'$ in $W$ , we define an auxiliary function $rm$ for \"rank match\":",
|
| 650 |
+
"bbox": [
|
| 651 |
+
505,
|
| 652 |
+
84,
|
| 653 |
+
884,
|
| 654 |
+
307
|
| 655 |
+
],
|
| 656 |
+
"page_idx": 3
|
| 657 |
+
},
|
| 658 |
+
{
|
| 659 |
+
"type": "equation",
|
| 660 |
+
"text": "\n$$\nr m _ {< g, < m} (w, w ^ {\\prime}) = \\left\\{ \\begin{array}{l l} 1 \\text {i f f} (w < _ {g} w ^ {\\prime} \\wedge w < _ {m} w ^ {\\prime}) \\\\ \\quad \\vee (w > _ {g} w ^ {\\prime} \\wedge w > _ {m} w ^ {\\prime}) \\\\ 0 \\text {e l s e} \\end{array} \\right.\n$$\n",
|
| 661 |
+
"text_format": "latex",
|
| 662 |
+
"bbox": [
|
| 663 |
+
507,
|
| 664 |
+
313,
|
| 665 |
+
894,
|
| 666 |
+
365
|
| 667 |
+
],
|
| 668 |
+
"page_idx": 3
|
| 669 |
+
},
|
| 670 |
+
{
|
| 671 |
+
"type": "text",
|
| 672 |
+
"text": "Then standard pairwise rank accuracy on $W = \\langle w_1, \\ldots, w_n \\rangle$ is defined as",
|
| 673 |
+
"bbox": [
|
| 674 |
+
509,
|
| 675 |
+
370,
|
| 676 |
+
882,
|
| 677 |
+
401
|
| 678 |
+
],
|
| 679 |
+
"page_idx": 3
|
| 680 |
+
},
|
| 681 |
+
{
|
| 682 |
+
"type": "equation",
|
| 683 |
+
"text": "\n$$\n\\begin{array}{l} \\mathrm {r - a c c} _ {W} (< _ {g}, < _ {m}) = \\frac {1}{n (n - 1)} \\\\ \\sum_ {1 \\leq i < j \\leq n} r m _ {< g, < m} \\left(w _ {i}, w _ {j}\\right) \\\\ \\end{array}\n$$\n",
|
| 684 |
+
"text_format": "latex",
|
| 685 |
+
"bbox": [
|
| 686 |
+
554,
|
| 687 |
+
409,
|
| 688 |
+
825,
|
| 689 |
+
449
|
| 690 |
+
],
|
| 691 |
+
"page_idx": 3
|
| 692 |
+
},
|
| 693 |
+
{
|
| 694 |
+
"type": "text",
|
| 695 |
+
"text": "Now assume that $T = \\{k_1, \\dots, k_\\ell\\}$ , with $k_j \\in \\{1, \\dots, n\\}$ for all $j$ , is the set of test word indices among the indices of $W$ . Assume both orderings, $<_g$ and $<_m$ , are defined on all of $W$ . Then our new extended pairwise rank accuracy is",
|
| 696 |
+
"bbox": [
|
| 697 |
+
507,
|
| 698 |
+
454,
|
| 699 |
+
882,
|
| 700 |
+
535
|
| 701 |
+
],
|
| 702 |
+
"page_idx": 3
|
| 703 |
+
},
|
| 704 |
+
{
|
| 705 |
+
"type": "equation",
|
| 706 |
+
"text": "\n$$\n\\begin{array}{l} \\mathrm {r} ^ {+} - \\operatorname {a c c} _ {W} (< _ {g}, < _ {m}) = \\frac {1}{\\ell (\\ell - 1) + \\ell (n - \\ell)} \\\\ \\sum_ {1 \\leq i < j \\leq \\ell} r m _ {< g, < m} \\left(w _ {k _ {i}}, w _ {k _ {j}}\\right) + \\\\ \\sum_ {i \\in \\{1, \\dots , \\ell \\}, j \\notin T} r m _ {< g, < m} \\left(w _ {k _ {i}}, w _ {j}\\right) \\\\ \\end{array}\n$$\n",
|
| 707 |
+
"text_format": "latex",
|
| 708 |
+
"bbox": [
|
| 709 |
+
529,
|
| 710 |
+
542,
|
| 711 |
+
848,
|
| 712 |
+
596
|
| 713 |
+
],
|
| 714 |
+
"page_idx": 3
|
| 715 |
+
},
|
| 716 |
+
{
|
| 717 |
+
"type": "text",
|
| 718 |
+
"text": "The first half of this formula measures pairwise rank accuracy among members of the test set; the second half measures rank accuracy of test words with respect to training words.",
|
| 719 |
+
"bbox": [
|
| 720 |
+
505,
|
| 721 |
+
602,
|
| 722 |
+
882,
|
| 723 |
+
667
|
| 724 |
+
],
|
| 725 |
+
"page_idx": 3
|
| 726 |
+
},
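
A direct reading of the definition in code (our sketch, not the authors'; note we count each unordered pair once, so the normalizer is $\ell(\ell-1)/2 + \ell(n-\ell)$ rather than the constant printed in the formula):

```python
def rank_match(g_i, g_j, m_i, m_j):
    # 1 iff gold and model order the pair the same way.
    return int((g_i < g_j and m_i < m_j) or (g_i > g_j and m_i > m_j))

def r_plus_acc(gold, pred, test_idx):
    """Pairwise rank accuracy over test-test pairs plus test-train pairs."""
    test = sorted(test_idx)
    train = [i for i in range(len(gold)) if i not in set(test_idx)]
    agree, total = 0, 0
    for a, i in enumerate(test):
        for j in test[a + 1:]:           # pairs within the test set
            agree += rank_match(gold[i], gold[j], pred[i], pred[j]); total += 1
        for j in train:                  # test word vs. training word
            agree += rank_match(gold[i], gold[j], pred[i], pred[j]); total += 1
    return agree / total
```
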
|
| 727 |
+
{
|
| 728 |
+
"type": "text",
|
| 729 |
+
"text": "Pairwise rank accuracy and extended pairwise rank accuracy are similar to correlation metrics in that they measure to what extent gold and model-based rankings agree. And in fact all three metrics are highly correlated: We tested correlation between the three metrics on seed-based dimensions for the Grand et al. (2022) data and obtained highly significant correlations $(p\\ll 0.0001,r = 0.972$ for pairwise rank accuracy, $r = 0.971$ for extended pairwise rank accuracy).4",
|
| 730 |
+
"bbox": [
|
| 731 |
+
505,
|
| 732 |
+
668,
|
| 733 |
+
882,
|
| 734 |
+
829
|
| 735 |
+
],
|
| 736 |
+
"page_idx": 3
|
| 737 |
+
},
|
| 738 |
+
{
|
| 739 |
+
"type": "page_footnote",
|
| 740 |
+
"text": "4To measure extended pairwise rank accuracy, the data was split into training and test folds in 5-fold cross-validation, and $\\mathbf{r}^{+}$ -acc scores were averaged over folds. It is not surprising that the values are almost the same, as in this case extended pairwise rank accuracy is almost the same as standard pairwise rank accuracy except that the latter omits some pairwise comparisons, namely between training data points.",
|
| 741 |
+
"bbox": [
|
| 742 |
+
507,
|
| 743 |
+
835,
|
| 744 |
+
882,
|
| 745 |
+
921
|
| 746 |
+
],
|
| 747 |
+
"page_idx": 3
|
| 748 |
+
},
|
| 749 |
+
{
|
| 750 |
+
"type": "page_footnote",
|
| 751 |
+
"text": "3This range was chosen because all gold ratings in our study are z-scores.",
|
| 752 |
+
"bbox": [
|
| 753 |
+
112,
|
| 754 |
+
894,
|
| 755 |
+
487,
|
| 756 |
+
921
|
| 757 |
+
],
|
| 758 |
+
"page_idx": 3
|
| 759 |
+
},
|
| 760 |
+
{
|
| 761 |
+
"type": "page_number",
|
| 762 |
+
"text": "2678",
|
| 763 |
+
"bbox": [
|
| 764 |
+
480,
|
| 765 |
+
927,
|
| 766 |
+
519,
|
| 767 |
+
940
|
| 768 |
+
],
|
| 769 |
+
"page_idx": 3
|
| 770 |
+
},
|
| 771 |
+
{
|
| 772 |
+
"type": "text",
|
| 773 |
+
"text": "As a second evaluation metric, we test how far off from the gold ratings each individual predicted rating is. We use the mean square error (MSE) of predicted ratings compared to gold ratings. We can do this because all FIT models learn to predict ratings on the same scale as the gold ratings. In order to apply the same evaluation to the SEED model and the baselines, we simply use linear regression to map model predictions to ratings on the same scale as the gold ratings. Linear regression models are fit on the training portion of each data set, so that test words remain truly unseen.",
|
| 774 |
+
"bbox": [
|
| 775 |
+
112,
|
| 776 |
+
84,
|
| 777 |
+
492,
|
| 778 |
+
280
|
| 779 |
+
],
|
| 780 |
+
"page_idx": 4
|
| 781 |
+
},
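
A sketch of this calibration step for the SEED model and baselines (scikit-learn is our choice of library here; the paper does not specify one):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def calibrated_mse(train_proj, train_gold, test_proj, test_gold):
    """Fit a linear map from raw scalar projections to the gold rating scale
    on training words only, then report MSE on the held-out test words."""
    reg = LinearRegression().fit(np.reshape(train_proj, (-1, 1)), train_gold)
    preds = reg.predict(np.reshape(test_proj, (-1, 1)))
    return mean_squared_error(test_gold, preds)
```
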
|
| 782 |
+
{
|
| 783 |
+
"type": "text",
|
| 784 |
+
"text": "3.3 Data and Vectors",
|
| 785 |
+
"text_level": 1,
|
| 786 |
+
"bbox": [
|
| 787 |
+
112,
|
| 788 |
+
288,
|
| 789 |
+
295,
|
| 790 |
+
304
|
| 791 |
+
],
|
| 792 |
+
"page_idx": 4
|
| 793 |
+
},
|
| 794 |
+
{
|
| 795 |
+
"type": "text",
|
| 796 |
+
"text": "We use the ratings collected by Grand et al. (2022) which describe properties of objects in nine categories: animals, clothing, professions, weather phenomena, sports, mythological creatures, world cities, states of the United States, and first names. Each category is matched with a subset of these semantic features: age, arousal, cost, danger, gender, intelligence, location (indoors vs. outdoors), partisanship (liberal vs. conservative), religiosity, size, speed, temperature, valence, volume, wealth, weight, and wetness. For style, we use datasets released by Pavlick and Nenkova (2015) which contain words and phrases with human ratings of formality and complexity. For each dimension, we sample words with high annotation confidence (i.e. where annotators agreed about the word being complex or formal): We calculate the mean standard deviation for words in our sample, and keep words where deviation between human scores is lower than that mean. The filtered datasets contain 1,160 words for complexity, and 1,274 words for formality.",
|
| 797 |
+
"bbox": [
|
| 798 |
+
112,
|
| 799 |
+
309,
|
| 800 |
+
490,
|
| 801 |
+
664
|
| 802 |
+
],
|
| 803 |
+
"page_idx": 4
|
| 804 |
+
},
|
| 805 |
+
{
|
| 806 |
+
"type": "text",
|
| 807 |
+
"text": "We extract seed words from two other datasets released by Pavlick and Nenkova (2015) which contain pairwise paraphrase judgments of formality and complexity. Annotations reflect which phrase in a pair (e.g., letter-communication, largely-extensively) is more complex or formal than the other. We collect five pairs of words for each style",
|
| 808 |
+
"bbox": [
|
| 809 |
+
112,
|
| 810 |
+
665,
|
| 811 |
+
489,
|
| 812 |
+
778
|
| 813 |
+
],
|
| 814 |
+
"page_idx": 4
|
| 815 |
+
},
|
| 816 |
+
{
|
| 817 |
+
"type": "list",
|
| 818 |
+
"sub_type": "text",
|
| 819 |
+
"list_items": [
|
| 820 |
+
"5The data is available on the Open Science Framework at https://osf.io/5r2sz/. No license information is given in the repository.",
|
| 821 |
+
"Most categories consist of 50 items.",
|
| 822 |
+
"7We sample words with more than three characters to exclude pronouns, articles, numerals, and multiword phrases.",
|
| 823 |
+
"The data is available at https://cs.brown.edu/people/epavlick/data.html, under \"Style Lexicons: Human and automatic scores of formality and complexity for words, phrases, and sentences\". No license for the data is given."
|
| 824 |
+
],
|
| 825 |
+
"bbox": [
|
| 826 |
+
112,
|
| 827 |
+
785,
|
| 828 |
+
489,
|
| 829 |
+
920
|
| 830 |
+
],
|
| 831 |
+
"page_idx": 4
|
| 832 |
+
},
|
| 833 |
+
{
|
| 834 |
+
"type": "text",
|
| 835 |
+
"text": "dimension for which inter-rater agreement is high. For complexity, we obtain the seed pairs work - employment, further - subsequently, strong - powerful, train - railway, shown - indicated, where the first member of each pair is the negative seed (the simpler word). For formality, we used winner - recipient, terrible - disastrous, membership - affiliation, highest - paramount, test - verify, where again the first member of each pair is the negative seed (the less formal word).",
|
| 836 |
+
"bbox": [
|
| 837 |
+
505,
|
| 838 |
+
84,
|
| 839 |
+
884,
|
| 840 |
+
243
|
| 841 |
+
],
|
| 842 |
+
"page_idx": 4
|
| 843 |
+
},
|
| 844 |
+
{
|
| 845 |
+
"type": "text",
|
| 846 |
+
"text": "Following Grand et al. (2022), we averaged over human subject ratings for each datapoint, then normalized ratings to z-scores separately for each pair of a category and property. For formality and complexity, ratings were also converted to z-scores.",
|
| 847 |
+
"bbox": [
|
| 848 |
+
507,
|
| 849 |
+
246,
|
| 850 |
+
884,
|
| 851 |
+
326
|
| 852 |
+
],
|
| 853 |
+
"page_idx": 4
|
| 854 |
+
},
|
| 855 |
+
{
|
| 856 |
+
"type": "text",
|
| 857 |
+
"text": "As embeddings, we use the same representations as Grand et al. (2022), pre-trained GLoVE embeddings trained on the Common Crawl (42b tokens), 300 dimensions, uncased (Pennington et al., 2014). We also use contextualized representations from the BERT (bert-large-uncased) and RoBERTa (roberta-large) models (Devlin et al., 2019; Liu et al., 2019) with sentences from UkWac (Baroni et al., 2009). For each word instance, we average its contextualized representations from the top 4 layers of the model. If the word is split into multiple wordpieces during tokenization, we average the representations of its pieces in order to obtain a single type-level representation for each word, as is common practice in semantic probing studies (Bommasani et al., 2020; Vulic et al., 2020; Garí Soler and Apidianaki, 2021a). The final representation for a word is the average of its representations from the available sentences. Aggregating representations across multiple contexts is the most common approach for creating word type-level embeddings from contextualized representations which serves to null out, to some extent, the impact of specific contexts (Apidianaki, 2022). It is possible to use more sophisticated ways for sentence selection, such as language modeling criteria (Garí Soler and Apidianaki, 2020) and exclusion of contexts where antonyms could occur (Lucy et al., 2022). However applying such sophisticated context selection methods is not always",
|
| 858 |
+
"bbox": [
|
| 859 |
+
507,
|
| 860 |
+
326,
|
| 861 |
+
884,
|
| 862 |
+
810
|
| 863 |
+
],
|
| 864 |
+
"page_idx": 4
|
| 865 |
+
},
|
| 866 |
+
{
|
| 867 |
+
"type": "page_footnote",
|
| 868 |
+
"text": "<sup>9</sup>We would expect the data to show subjective differences between annotators, and in the future we would like to model the subjective ratings directly, following Plank et al. (2014).",
|
| 869 |
+
"bbox": [
|
| 870 |
+
507,
|
| 871 |
+
821,
|
| 872 |
+
882,
|
| 873 |
+
858
|
| 874 |
+
],
|
| 875 |
+
"page_idx": 4
|
| 876 |
+
},
|
| 877 |
+
{
|
| 878 |
+
"type": "page_footnote",
|
| 879 |
+
"text": "10 Details about sentence selection are given in the Appendix.",
|
| 880 |
+
"bbox": [
|
| 881 |
+
507,
|
| 882 |
+
858,
|
| 883 |
+
882,
|
| 884 |
+
883
|
| 885 |
+
],
|
| 886 |
+
"page_idx": 4
|
| 887 |
+
},
|
| 888 |
+
{
|
| 889 |
+
"type": "page_footnote",
|
| 890 |
+
"text": "11Averaging across layer subsets is generally better than averaging across all layers or selecting a single layer (Vulic et al., 2020).",
|
| 891 |
+
"bbox": [
|
| 892 |
+
507,
|
| 893 |
+
883,
|
| 894 |
+
882,
|
| 895 |
+
920
|
| 896 |
+
],
|
| 897 |
+
"page_idx": 4
|
| 898 |
+
},
|
| 899 |
+
{
|
| 900 |
+
"type": "page_number",
|
| 901 |
+
"text": "2679",
|
| 902 |
+
"bbox": [
|
| 903 |
+
480,
|
| 904 |
+
927,
|
| 905 |
+
519,
|
| 906 |
+
940
|
| 907 |
+
],
|
| 908 |
+
"page_idx": 4
|
| 909 |
+
},
|
| 910 |
+
    {
        "type": "text",
        "text": "better than random selection, which might be due to the skewed distribution of word senses and the stronger presence of the most frequent sense of a word in randomly selected sentences (Kilgarriff, 2004; McCarthy et al., 2004).",
        "bbox": [112, 84, 489, 164],
        "page_idx": 5
    },
    {
        "type": "text",
        "text": "4 Results and discussion",
        "text_level": 1,
        "bbox": [112, 175, 342, 191],
        "page_idx": 5
    },
    {
        "type": "text",
        "text": "In this section we evaluate the different interpretable dimension models from Section 3.1 on the tasks of predicting human ratings of object properties (Grand et al., 2022), and human ratings of the complexity and formality of words (Pavlick and Nenkova, 2015).",
        "bbox": [112, 200, 490, 297],
        "page_idx": 5
    },
    {
        "type": "text",
        "text": "Experimental setup. To make the most of the limited available data, all models were tested in 5-fold cross-validation. In addition, all models that involve randomness (all except SEED) were re-run three times with different random seeds. For Grand et al. object features, we first compute mean $r^+$-acc and median MSE for each category/property pair (averaging over cross-validation runs and random seeds), then we report averages over those. For formality and complexity we report overall mean $r^+$-acc and median MSE.$^{12}$ Note that because we split the data into training and test using cross-validation, the numbers reported in this paper are not comparable with those reported in earlier papers on the same dataset. We do however compute SEED dimensions with the same cross-validation setup as the FIT-based dimensions, so that the numbers that we report are comparable to each other.",
        "bbox": [115, 306, 489, 595],
        "page_idx": 5
    },
    {
        "type": "text",
        "text": "To set hyperparameters, we sample 6 category/property pairs from the Grand et al. data as a development set. Hyperparameters were optimized once per embedding space; there was no separate hyperparameter optimization for the formality and complexity data.$^{13}$ Overall, we find that low values of $\\alpha$ work well, and that it is beneficial to input a single averaged seed dimension to FIT+SD and FIT+S, rather than individual seed dimensions. The choice of offset and jitter does not matter. More details on hyperparameters and the development set can be found in the Appendix. Results reported below for Grand et al. data are for all category/property pairs not in the development set.",
        "bbox": [112, 596, 489, 821],
        "page_idx": 5
    },
    {
        "type": "text",
        "text": "Overall performance. Overall results are shown for object properties in Table 1 and for stylistic",
        "bbox": [112, 829, 487, 862],
        "page_idx": 5
    },
    {
        "type": "text",
        "text": "features in Table 2. Looking at object properties first, and focusing on extended rank accuracy, the FIT model by itself is not very good, and adding seeds as words (FIT+SW) does not help. FIT+SD is better, and outperforms SEED slightly, but the FIT+S model, which computes fitted dimensions using seed information both as words and dimensions, shows the best performance, outperforming SEED strongly. In terms of MSE, even medians are very high for the SEED model, so many seed-based dimensions were not able to predict ratings on the same scale as the gold ratings. MSE is much lower for both fitted models that make use of seed dimensions, especially FIT+S.$^{14}$ Looking at the baselines, the FIT and FIT+SW models with BERT and RoBERTa underperform the frequency baseline, and are on par with random guessing. The frequency baseline is somewhat higher than random, though it is not entirely clear what kind of signal for object properties can be derived from word frequency.",
        "bbox": [507, 84, 884, 420],
        "page_idx": 5
    },
    {
        "type": "text",
        "text": "On the stylistic data, the relative performance of the fitted models is similar, but here they mostly do not outperform the SEED dimensions. Overall performance of the SEED dimensions is higher here, which raises the question of whether fitted models help in particular when seed-based dimensions do not perform well; we explore this further below. Looking at MSE, we confirm that fitted models that use seed dimensions have much lower error than the other models. Comparing embedding spaces, we see consistently the best performance with GloVe embeddings. The BERT and RoBERTa FIT and FIT+SW models in particular are again at the level of the random baseline. The frequency baseline is reasonably strong, matching previous findings.",
        "bbox": [507, 423, 884, 664],
        "page_idx": 5
    },
    {
        "type": "text",
        "text": "Fitted dimensions, by themselves, are underdetermined by human ratings. The FIT model, which computes fitted dimensions from human ratings only, does not perform well, and we suspect that the size of the embedding space allows for too many ways to fit a dimension to ratings, causing the model to overfit. To test this, we first computed FIT dimensions, for Grand et al. object features, from all human ratings, obtaining perfectly fit dimensions in every single case. We next train dimensions on all human ratings but scramble the word/rating pairs, making them nonsensical. Again we obtain perfectly fit dimensions in every single",
        "bbox": [507, 675, 884, 884],
        "page_idx": 5
    },
    {
        "type": "page_footnote",
        "text": "<sup>14</sup>Though note that the ratings are z-scores, so the MSE is still larger than half a standard deviation.",
        "bbox": [507, 894, 882, 919],
        "page_idx": 5
    },
    {
        "type": "page_footnote",
        "text": "<sup>12</sup>We use median MSE because outliers make the mean uninformatively high.",
        "bbox": [112, 869, 487, 895],
        "page_idx": 5
    },
    {
        "type": "page_footnote",
        "text": "<sup>13</sup>Human ratings on all datasets were on the same scale, as we normalized them all to z-scores.",
        "bbox": [112, 894, 485, 919],
        "page_idx": 5
    },
    {
        "type": "page_number",
        "text": "2680",
        "bbox": [480, 928, 519, 940],
        "page_idx": 5
    },
    {
        "type": "table",
        "img_path": "images/80a252a0bd612555bd4a3eb1b584c2b8a2007d3581f8a6c43c979d8aaa8a20b6.jpg",
        "table_caption": [
            "Table 1: Results on object properties: Extended rank accuracy (abbreviated $\\mathbf{r}^{+}$-acc) and Mean Squared Error (MSE), averaged over category/property pairs. In brackets: Standard error. Shown for 3 embedding spaces. Bolded: best performance for each embedding."
        ],
        "table_footnote": [],
        "table_body": "<table><tr><td colspan=\"2\"></td><td colspan=\"2\">SEED</td><td colspan=\"2\">FIT</td><td colspan=\"2\">FIT+SW</td><td colspan=\"2\">FIT+SD</td><td colspan=\"2\">FIT+S</td></tr><tr><td rowspan=\"2\">GloVe</td><td>r+-acc</td><td>0.64</td><td>(0.1)</td><td>0.54</td><td>(0.03)</td><td>0.53</td><td>(0.03)</td><td>0.65</td><td>(0.1)</td><td>0.80</td><td>(0.06)</td></tr><tr><td>MSE</td><td colspan=\"2\">>1000 (>1000)</td><td>113.2</td><td>(111.7)</td><td>177.1</td><td>(125.4)</td><td>89.6</td><td>(199.6)</td><td>0.7</td><td>(0.36)</td></tr><tr><td rowspan=\"2\">BERT</td><td>r+-acc</td><td>0.64</td><td>(0.1)</td><td>0.51</td><td>(0.03)</td><td>0.52</td><td>(0.03)</td><td>0.66</td><td>(0.10)</td><td>0.71</td><td>(0.04)</td></tr><tr><td>MSE</td><td colspan=\"2\">>1000 (>1000)</td><td>417.4</td><td>(271.7)</td><td>597.4</td><td>(525.6)</td><td>115.0</td><td>(437.4)</td><td>2.0</td><td>(0.6)</td></tr><tr><td rowspan=\"2\">RoBERTa</td><td>r+-acc</td><td>0.57</td><td>(0.08)</td><td>0.51</td><td>(0.03)</td><td>0.51</td><td>(0.03)</td><td>0.60</td><td>(0.1)</td><td>0.69</td><td>(0.04)</td></tr><tr><td>MSE</td><td colspan=\"2\">>1000 (>1000)</td><td>392.5</td><td>(291.3)</td><td>458.0</td><td>(284.3)</td><td>125.2</td><td>(270.5)</td><td>1.9</td><td>(0.6)</td></tr></table>",
        "bbox": [114, 80, 890, 170],
        "page_idx": 6
    },
    {
        "type": "table",
        "img_path": "images/df4cb4153ee784fdadb71b21c1cf42329208e5f076c11a66136a477479f7798c.jpg",
        "table_caption": [],
        "table_footnote": [],
        "table_body": "<table><tr><td rowspan=\"2\" colspan=\"2\"></td><td colspan=\"5\">Complexity</td><td colspan=\"6\">Formality</td></tr><tr><td>SEED</td><td>FIT</td><td>FIT+SW</td><td>FIT+SD</td><td>FIT+S</td><td rowspan=\"7\">FREQ r+-acc: 0.65 RAND r+-acc: 0.50</td><td>SEED</td><td>FIT</td><td>FIT+SW</td><td>FIT+SD</td><td>FIT+S</td></tr><tr><td rowspan=\"2\">GloVe</td><td>r+-acc</td><td>0.74</td><td>0.59</td><td>0.57</td><td>0.76</td><td>0.72</td><td>0.73</td><td>0.53</td><td>0.37</td><td>0.68</td><td>0.69</td></tr><tr><td>MSE</td><td>31.5</td><td>24.3</td><td>75.1</td><td>1.5</td><td>1.2</td><td>60.5</td><td>396.5</td><td>285.7</td><td>1.8</td><td>1.6</td></tr><tr><td rowspan=\"2\">BERT</td><td>r+-acc</td><td>0.69</td><td>0.52</td><td>0.52</td><td>0.71</td><td>0.72</td><td>0.64</td><td>0.52</td><td>0.51</td><td>0.64</td><td>0.69</td></tr><tr><td>MSE</td><td>123.5</td><td>437.2</td><td>724.0</td><td>3.6</td><td>2.4</td><td>215.6</td><td>216.3</td><td>617.1</td><td>7.8</td><td>3.2</td></tr><tr><td rowspan=\"2\">RoBERTa</td><td>r+-acc</td><td>0.74</td><td>0.51</td><td>0.51</td><td>0.71</td><td>0.73</td><td>0.67</td><td>0.53</td><td>0.53</td><td>0.66</td><td>0.71</td></tr><tr><td>MSE</td><td>82.7</td><td>591.9</td><td>>1000</td><td>3.9</td><td>2.3</td><td>223.0</td><td>325.0</td><td>778.1</td><td>7.3</td><td>2.4</td></tr></table>",
        "bbox": [114, 235, 890, 336],
        "page_idx": 6
    },
    {
        "type": "text",
        "text": "Table 2: Results on formality and complexity: Extended rank accuracy ($\\mathbf{r}^{+}$-acc) and Mean Squared Error (MSE). Shown for 3 embedding spaces. Bolded: best performance for each embedding.",
        "bbox": [112, 344, 882, 374],
        "page_idx": 6
    },
    {
        "type": "image",
        "img_path": "images/71936f961c1931e36699b6dd10531467e3cb2d0eb89d773a8a5fc996d88dfeae.jpg",
        "image_caption": [
            "Figure 2: Increase in $r^+$-acc for FIT+S over SEED, object properties grouped by performance of SEED."
        ],
        "image_footnote": [],
        "bbox": [173, 398, 430, 548],
        "page_idx": 6
    },
    {
        "type": "text",
        "text": "case, which confirms our suspicion. The picture that emerges is that FIT by itself does not have enough information to fit a good dimension and overfits to the training data. The seed information provided to FIT+SW, FIT+SD and FIT+S gives the models the additional guidance needed to make good use of the human ratings, and the combination of seeds and human ratings on words leads to overall better dimensions – at least in some cases. We next ask which cases those are.",
        "bbox": [112, 615, 489, 776],
        "page_idx": 6
    },
    {
        "type": "text",
        "text": "Human ratings help most when seed-based dimensions underperform. Comparing $r^+$-acc values for seed-based dimensions and FIT+S dimensions on the object property data, we find that FIT+S improves over SEED in every single one of the 50 category/property pairs.$^{15}$ The performance",
        "bbox": [112, 787, 489, 885],
        "page_idx": 6
    },
    {
        "type": "text",
        "text": "increase is highest when performance of the seed-based dimensions is lowest, as shown in Figure 2: For the $20\\%$ of category/property pairs with the lowest SEED performance, the average improvement is 27.3 points, while for the $20\\%$ of category/property pairs with the highest SEED performance, the average improvement is 1.7 points. This could explain the lack of improvement achieved on stylistic features, as SEED already performs well on this data.",
        "bbox": [507, 399, 884, 543],
        "page_idx": 6
    },
    {
        "type": "text",
        "text": "Table 3 further zooms in on the object feature data, showing performance on some category/property pairs with low, medium, and high performance of the SEED dimensions. We see that FIT and FIT+SW underperform throughout. FIT+S shows the overall best performance, but the improvement over SEED is particularly high for the first group of conditions, where SEED dimensions get no traction on the data. FIT+SD shows good extended rank accuracy on the conditions with medium to high SEED performance, but not on the conditions that are particularly poorly modeled by SEED.",
        "bbox": [507, 546, 885, 755],
        "page_idx": 6
    },
    {
        "type": "text",
        "text": "FIT+S models are the only ones that predict ratings on the gold scale. We saw above that median MSE values are extremely high for many models, especially for SEED. We now take a closer look; in particular, we want to know how often we obtain MSE values that are extremely far off from the gold ratings. We again focus on the object fea-",
        "bbox": [507, 769, 885, 882],
        "page_idx": 6
    },
    {
        "type": "page_footnote",
        "text": "<sup>15</sup>The analysis includes all conditions not in the development set. For each condition, we again ran 5-fold cross-",
        "bbox": [112, 894, 489, 921],
        "page_idx": 6
    },
    {
        "type": "page_footnote",
        "text": "validation, each repeated over 3 random seeds, then averaged by condition.",
        "bbox": [507, 895, 882, 921],
        "page_idx": 6
    },
    {
        "type": "page_number",
        "text": "2681",
        "bbox": [480, 928, 517, 940],
        "page_idx": 6
    },
    {
        "type": "table",
        "img_path": "images/e790ceef11f8728045543c20d87f9043fbbd54bb51611b29fa62737e2dce7e08.jpg",
        "table_caption": [
            "Table 3: Detail results for Grand et al. by SEED performance: lowest performance (top box), middling performance (middle), best performance (bottom)."
        ],
        "table_footnote": [],
        "table_body": "<table><tr><td rowspan=\"2\">Category, Feature</td><td colspan=\"5\">r+-acc</td></tr><tr><td>SEED</td><td>FIT</td><td>FIT+SW</td><td>FIT+SD</td><td>FIT+S</td></tr><tr><td>sports/speed</td><td>0.46</td><td>0.55</td><td>0.56</td><td>0.52</td><td>0.78</td></tr><tr><td>states/cost</td><td>0.46</td><td>0.5</td><td>0.5</td><td>0.42</td><td>0.83</td></tr><tr><td>cities/arousal</td><td>0.47</td><td>0.52</td><td>0.5</td><td>0.51</td><td>0.82</td></tr><tr><td>animals/intelligence</td><td>0.48</td><td>0.55</td><td>0.54</td><td>0.5</td><td>0.79</td></tr><tr><td>clothing/cost</td><td>0.48</td><td>0.52</td><td>0.51</td><td>0.55</td><td>0.76</td></tr><tr><td>clothing/wealth</td><td>0.62</td><td>0.52</td><td>0.53</td><td>0.6</td><td>0.82</td></tr><tr><td>states/wealth</td><td>0.62</td><td>0.55</td><td>0.56</td><td>0.64</td><td>0.82</td></tr><tr><td>weather/temperature</td><td>0.69</td><td>0.56</td><td>0.52</td><td>0.66</td><td>0.76</td></tr><tr><td>animals/danger</td><td>0.7</td><td>0.6</td><td>0.57</td><td>0.76</td><td>0.84</td></tr><tr><td>clothing/age</td><td>0.71</td><td>0.52</td><td>0.55</td><td>0.71</td><td>0.8</td></tr><tr><td>weather/danger</td><td>0.79</td><td>0.55</td><td>0.54</td><td>0.7</td><td>0.82</td></tr><tr><td>clothing/gender</td><td>0.81</td><td>0.56</td><td>0.53</td><td>0.8</td><td>0.82</td></tr><tr><td>sports/gender</td><td>0.81</td><td>0.61</td><td>0.56</td><td>0.81</td><td>0.84</td></tr><tr><td>professionals/gender</td><td>0.85</td><td>0.56</td><td>0.56</td><td>0.87</td><td>0.86</td></tr><tr><td>names/gender</td><td>0.87</td><td>0.56</td><td>0.51</td><td>0.87</td><td>0.87</td></tr></table>",
        "bbox": [117, 80, 495, 259],
        "page_idx": 7
    },
    {
        "type": "image",
        "img_path": "images/9373b5ca37fbdcd26df498deba95d69870da03e71f0457132b4ffe9d598de6f1.jpg",
        "image_caption": [
            "Figure 3: MSE distributions for runs of different models. y-axis: count of runs."
        ],
        "image_footnote": [],
        "bbox": [173, 326, 430, 470],
        "page_idx": 7
    },
    {
        "type": "text",
        "text": "ture data, as there we have many conditions that we can compare. Figure 3 shows, for each model, how many runs had MSE values in the ranges of $< 2, 2 - 10, 10 - 100$, and $> 100$. Recall that gold ratings are z-scores, so they tend to be in a range of -2 to 2. We again only use the category/property pairs that are not in the development set, but now count separately each cross-validation run and each random seed. We see that many runs of SEED, FIT and FIT+SW have very high MSE values. FIT+SD is the first model for which we see a considerable percentage of runs with MSE values below 2 (the blue bar comprises $20\\%$ of runs for this model), but strikingly, $97\\%$ of runs of FIT+S have MSE values below 2, and all have values below 10. So this model is much more consistent than the other models, and in fact is highly consistent in fitting dimensions that deliver predictions in the range of the gold data.",
        "bbox": [112, 539, 489, 829],
        "page_idx": 7
    },
    {
        "type": "text",
        "text": "Zooming in: Examples of predictions. We take a closer look at two kinds of object properties: clothes by wealth, and animals by size. In order to obtain sufficiently many test datapoints to look at, we divide the data into 2/3 training and 1/3",
        "bbox": [112, 841, 489, 921],
        "page_idx": 7
    },
    {
        "type": "image",
        "img_path": "images/bf3988381688d1bc87ff90e740cdc699333d81bcde135e88a304025fa6967272.jpg",
        "image_caption": [
            "Figure 4: Clothes rated for wealth (left), animals rated for size (right). Gold ratings: dark purple. SEED predictions: red. FIT+S predictions: light blue. Datapoints ordered by gold rank."
        ],
        "image_footnote": [],
        "bbox": [512, 82, 690, 167],
        "page_idx": 7
    },
    {
        "type": "image",
        "img_path": "images/44da3beb2762245eac8e9099290c3f9296c4e95efa70582abc1b33900ea33adb.jpg",
        "image_caption": [],
        "image_footnote": [],
        "bbox": [705, 82, 882, 166],
        "page_idx": 7
    },
    {
        "type": "table",
        "img_path": "images/3b655ba73055e62a38268f1aa1ebfb727ed7f6a8dfe91891de367cbc6b0facdd.jpg",
        "table_caption": [],
        "table_footnote": [],
        "table_body": "<table><tr><td>Gold</td><td>SEED</td><td>FIT+S</td></tr><tr><td>bee</td><td>chipmunk</td><td>butterfly</td></tr><tr><td>ant</td><td>hamster</td><td>bee</td></tr><tr><td>butterfly</td><td>monkey</td><td>chipmunk</td></tr><tr><td>goldfish</td><td>butterfly</td><td>bird</td></tr><tr><td>hamster</td><td>goldfish</td><td>hamster</td></tr><tr><td>chipmunk</td><td>dog</td><td>ant</td></tr><tr><td>bird</td><td>bee</td><td>crow</td></tr><tr><td>crow</td><td>bird</td><td>goldfish</td></tr><tr><td>dog</td><td>seal</td><td>seal</td></tr><tr><td>monkey</td><td>crow</td><td>dog</td></tr><tr><td>seal</td><td>ant</td><td>monkey</td></tr><tr><td>mammoth</td><td>whale</td><td>whale</td></tr><tr><td>whale</td><td>mammoth</td><td>mammoth</td></tr></table>",
        "bbox": [573, 286, 820, 454],
        "page_idx": 7
    },
    {
        "type": "text",
        "text": "Table 4: Comparing word rankings by humans, SEED dimensions, and FIT+S dimensions: Animals by size. Italicized: 3 words with highest error in ranking.",
        "bbox": [507, 466, 882, 508],
        "page_idx": 7
    },
    {
        "type": "text",
        "text": "test (as opposed to the 1/5 we use with 5-fold cross-validation). Figure 1 shows the test data words, along with seed-based and FIT+S dimensions, downprojected into 2 dimensions using PCA. For the same datapoints, Figure 4 plots gold ratings, SEED predictions, and FIT+S predictions. This plot illustrates how the SEED predictions are on a much larger scale than gold ratings, while FIT+S is the only model whose predictions stay on the same scale. (The next to last datapoint in animals/size is mammoth, which SEED largely overestimates – maybe because mammoth is also an adjective indicating gargantuan size.) Tables 4 and 5 show how the test data words for animals by size, and for clothes by wealth, are ranked by humans, by the SEED dimension, and by the FIT+S dimension. The italicized words are the three words whose model rank is furthest off from their gold rank. For the animals data, both models mis-rank ant, and overall seem to struggle more with smaller animals. Among the clothes, both models overestimate the wealth projected by wearing hats.",
        "bbox": [507, 567, 884, 921],
        "page_idx": 7
    },
    {
        "type": "page_number",
        "text": "2682",
        "bbox": [480, 928, 519, 940],
        "page_idx": 7
    },
    {
        "type": "table",
        "img_path": "images/cbf987b674e931155eb17d6f1c8a58794c3a797c7cc80826fa6ff42f22aafdf7.jpg",
        "table_caption": [],
        "table_footnote": [],
        "table_body": "<table><tr><td>Gold</td><td>SEED</td><td>FIT+S</td></tr><tr><td>sweatshirt</td><td>shorts</td><td>shorts</td></tr><tr><td>shorts</td><td>sweatshirt</td><td>boots</td></tr><tr><td>belt</td><td>belt</td><td>bikini</td></tr><tr><td>boots</td><td>blouse</td><td>tights</td></tr><tr><td>hat</td><td>boots</td><td>skirt</td></tr><tr><td>tights</td><td>swimsuit</td><td>swimsuit</td></tr><tr><td>bikini</td><td>skirt</td><td>stockings</td></tr><tr><td>swimsuit</td><td>trousers</td><td>trousers</td></tr><tr><td>skirt</td><td>bikini</td><td>loafers</td></tr><tr><td>blouse</td><td>robe</td><td>blouse</td></tr><tr><td>knickers</td><td>cuff</td><td>belt</td></tr><tr><td>dress</td><td>knickers</td><td>knickers</td></tr><tr><td>collar</td><td>hat</td><td>dress</td></tr><tr><td>trousers</td><td>collar</td><td>sweatshirt</td></tr><tr><td>stockings</td><td>vest</td><td>robe</td></tr><tr><td>robe</td><td>tights</td><td>collar</td></tr><tr><td>vest</td><td>loafers</td><td>cuff</td></tr><tr><td>cuff</td><td>stockings</td><td>vest</td></tr><tr><td>loafers</td><td>dress</td><td>hat</td></tr><tr><td>gown</td><td>gown</td><td>tuxedo</td></tr><tr><td>tuxedo</td><td>tuxedo</td><td>gown</td></tr></table>",
        "bbox": [178, 80, 425, 344],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "Table 5: Comparing word rankings by humans, SEED dimensions, and FIT+S dimensions: Clothes by wealth. Italicized: 3 words with highest error in ranking.",
        "bbox": [112, 354, 489, 398],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "5 Conclusion",
        "text_level": 1,
        "bbox": [112, 423, 247, 437],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "In this paper we have proposed a method for constructing high-quality interpretable dimensions in embedding spaces. We show that by combining seed-based vectors with guidance from human ratings about properties, it is possible to induce better dimensions than with the seed-based methodology alone. We expect the proposed dimensions to be useful in various areas of study, including linguistics, psychology, and social science.",
        "bbox": [112, 449, 487, 593],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "For the moment, the proposed dimensions address one property at a time. In future work, we are planning to explore multifaceted properties which would be better represented through multiple dimensions. Aside from a more elaborate description of these properties, a space of multiple interpretable dimensions will offer a rich context of comparison for words that might be similar in some respects and not in others (e.g., tiger and spider with respect to DANGER and SIZE).",
        "bbox": [112, 594, 489, 753],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "6 Limitations",
        "text_level": 1,
        "bbox": [112, 766, 250, 781],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "In our experiments we use English models and data. The seed-based methodology has been shown to work well in other languages, so an extension of the proposed methodology to other languages is possible. A limitation regarding this extension is the lack of human ratings, which are needed for calculating the fitted dimensions. A possible mitigation would be to translate the annotated English",
        "bbox": [112, 791, 489, 921],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "data into other languages.",
        "bbox": [507, 84, 702, 99],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "The ratings we used in our study were averages over individual human ratings, possibly obscuring legitimate differences between raters (Plank et al., 2014). Another limitation of the human ratings used in this study is that they were collected out of context, possibly obscuring effects of topic and polysemy.",
        "bbox": [507, 101, 884, 197],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "There are many different ways to use contextualized embeddings. We have averaged over all token representations generated by BERT and RoBERTa for a word in a sentence pool, and used the top 4 layers of the models. It is possible that BERT and RoBERTa would do better, or at least differently, if other model layers (or layer combinations) were used.",
        "bbox": [507, 198, 884, 325],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "Our approach is not at all compute intensive. All computations were done on a laptop.",
        "bbox": [507, 326, 882, 359],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "Acknowledgements",
        "text_level": 1,
        "bbox": [509, 370, 680, 387],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "We would like to thank the anonymous reviewers for their valuable feedback. This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the HIATUS Program contract #2022-22072200005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
        "bbox": [507, 397, 884, 621],
        "page_idx": 8
    },
    {
        "type": "text",
        "text": "References",
        "text_level": 1,
        "bbox": [510, 650, 608, 665],
        "page_idx": 8
    },
    {
        "type": "list",
        "sub_type": "ref_text",
        "list_items": [
            "Emily Allaway and Kathleen McKeown. 2021. A unified feature representation for lexical connotations. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2145-2163, Online. Association for Computational Linguistics.",
            "Jisun An, Haewoon Kwak, and Yong-Yeol Ahn. 2018. SemAxis: A lightweight framework to characterize domain-specific word semantics beyond sentiment. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2450-2461, Melbourne, Australia. Association for Computational Linguistics.",
            "Maria Antoniak and David Mimno. 2021. Bad seeds: Evaluating lexical methods for bias measurement. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the"
        ],
        "bbox": [509, 674, 885, 921],
        "page_idx": 8
    },
    {
        "type": "page_number",
        "text": "2683",
        "bbox": [480, 928, 519, 940],
        "page_idx": 8
    },
    {
        "type": "list",
        "sub_type": "ref_text",
        "list_items": [
            "11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1889-1904, Online. Association for Computational Linguistics.",
            "Marianna Apidianaki. 2022. From Word Types to Tokens and Back: A Survey of Approaches to Word Meaning Representation and Interpretation. Computational Linguistics, 49(2):465-523.",
            "Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209-226.",
            "Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics, 48(1):207-219.",
            "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Advances in Neural Information Processing Systems 29, pages 4349-4357, Barcelona, Spain.",
            "Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020. Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4758-4781, Online. Association for Computational Linguistics.",
            "Zied Bouraoui, José Camacho-Collados, and Steven Schockaert. 2020. Inducing Relational Knowledge from BERT. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, pages 7456-7463, New York, NY, USA. AAAI Press.",
            "Zied Bouraoui, Víctor Gutiérrez-Basulto, and Steven Schockaert. 2022. Integrating ontologies and vector space embeddings using conceptual spaces. In International Research School in Artificial Intelligence in Bergen (AIB 2022), volume 99 of Open Access Series in Informatics (OASIcs), pages 3:1-3:30, Dagstuhl, Germany. Schloss Dagstuhl – Leibniz-Zentrum für Informatik.",
            "Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram Version 1. In LDC2006T13, Philadelphia, Pennsylvania. Linguistic Data Consortium.",
            "Julian Brooke, Tong Wang, and Graeme Hirst. 2010. Automatic acquisition of lexical formality. In Coling 2010: Posters, pages 90-98, Beijing, China. Coling 2010 Organizing Committee.",
            "Anne Cocos, Skyler Wharton, Ellie Pavlick, Marianna Apidianaki, and Chris Callison-Burch. 2018. Learning scalar adjective intensity from paraphrases. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1752-1762, Brussels, Belgium. Association for Computational Linguistics."
        ],
        "bbox": [115, 85, 489, 920],
        "page_idx": 9
    },
    {
        "type": "list",
        "sub_type": "ref_text",
        "list_items": [
            "Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Australia. Association for Computational Linguistics.",
            "Joaquín Derrac and Steven Schockaert. 2015. Inducing semantic relations from conceptual spaces: A data-driven approach to plausible reasoning. Artificial Intelligence, 228:66-94.",
            "Sunipa Dev and Jeff M Phillips. 2019. Attenuating Bias in Word Vectors. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), Naha, Okinawa, Japan.",
            "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
            "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635-E3644.",
            "Aina Garí Soler and Marianna Apidianaki. 2020. BERT knows Punta Cana is not just beautiful, it's gorgeous: Ranking scalar adjectives with contextualised representations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7371-7385, Online. Association for Computational Linguistics.",
            "Aina Garí Soler and Marianna Apidianaki. 2021a. Let's Play Mono-Poly: BERT Can Reveal Words' Polysemy Level and Partitionability into Senses. Transactions of the Association for Computational Linguistics, 9:825-844.",
            "Aina Garí Soler and Marianna Apidianaki. 2021b. Scalar adjective identification and multilingual ranking. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4653-4660, Online. Association for Computational Linguistics.",
            "Gabriel Grand, Idan Asher Blank, Francisco Pereira, and Evelina Fedorenko. 2022. Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nature Human Behaviour, 6:975-987.",
            "Peter Gärdenfors. 2014. The Geometry of Meaning: Semantics Based on Conceptual Spaces. The MIT Press, Cambridge, MA."
        ],
        "bbox": [510, 85, 884, 920],
        "page_idx": 9
    },
    {
        "type": "page_number",
        "text": "2684",
        "bbox": [480, 928, 519, 940],
        "page_idx": 9
    },
    {
        "type": "list",
        "sub_type": "ref_text",
        "list_items": [
            "John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics.",
            "Ray S. Jackendoff. 1990. Semantic Structures. The MIT Press, Cambridge, MA.",
            "Shoaib Jameel and Steven Schockaert. 2016. Entity embeddings with conceptual subspaces as a basis for plausible reasoning. In Proceedings of the Twenty-Second European Conference on Artificial Intelligence, ECAI'16, pages 1353-1361, NLD. IOS Press.",
            "Jerrold J. Katz and Jerry A. Fodor. 1964. The structure of a semantic theory. In Jerry A. Fodor and Jerrold J. Katz, editors, The Structure of Language. Prentice-Hall, Englewood Cliffs, NJ.",
            "Adam Kilgarriff. 2004. How Dominant Is the Commonest Sense of a Word? Lecture Notes in Computer Science (vol. 3206), Text, Speech and Dialogue, Sojka Petr, Kopeček Ivan, Pala Karel (eds.), pages 103-112. Springer, Berlin, Heidelberg.",
            "Austin C. Kozlowski, Matt Taddy, and James A. Evans. 2019. The geometry of culture: Analyzing the meanings of class through word embeddings. American Sociological Review, 84(5):905-949.",
            "Haewoon Kwak, Jisun An, Elise Jing, and Yong-Yeol Ahn. 2021. FrameAxis: characterizing microframe bias and intensity with word embedding. PeerJ Computer Science, 7.",
            "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint:1907.11692.",
            "Li Lucy, Divya Tadimeti, and David Bamman. 2022. Discovering differences in the representation of people using contextualized semantic axes. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3477-3494, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
            "Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. 2023. Representation of lexical stylistic features in language models' embedding space. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM), Toronto, Canada.",
            "Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. 2024. Towards Faithful Model Explanation in NLP: A Survey. arXiv preprint:2209.11326.",
            "Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004. Finding predominant word senses in untagged text. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 279-286, Barcelona, Spain."
        ],
        "bbox": [115, 85, 489, 920],
        "page_idx": 10
    },
    {
        "type": "list",
        "sub_type": "ref_text",
        "list_items": [
            "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.",
            "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751, Atlanta, Georgia. Association for Computational Linguistics.",
            "Gregory L. Murphy. 2002. The Big Book of Concepts. MIT Press, Boston, Mass.",
            "Ellie Pavlick and Ani Nenkova. 2015. Inducing lexical style properties for paraphrase and genre differentiation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 218-224, Denver, Colorado. Association for Computational Linguistics.",
            "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
            "Barbara Plank, Dirk Hovy, and Anders Søgaard. 2014. Linguistically debatable or just plain wrong? In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 507-511, Baltimore, Maryland. Association for Computational Linguistics.",
            "Abhilasha Ravichander, Yonatan Belinkov, and Eduard Hovy. 2021. Probing the probing paradigm: Does probing accuracy entail task relevance? In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3363-3377, Online. Association for Computational Linguistics.",
            "Dustin S. Stoltz and Marshall A. Taylor. 2021. Cultural cartography with word embeddings. Poetics, 88:101567. Measure Mohr Culture.",
            "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.",
            "Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7222-7240, Online. Association for Computational Linguistics.",
            "Anna Wierzbicka. 1996. Semantics: Primes and Universals. Oxford University Press, New York."
        ],
        "bbox": [510, 85, 880, 920],
        "page_idx": 10
    },
    {
        "type": "page_number",
        "text": "2685",
        "bbox": [480, 928, 519, 940],
        "page_idx": 10
    },
    {
        "type": "text",
        "text": "A Appendix",
        "text_level": 1,
        "bbox": [114, 84, 238, 99],
        "page_idx": 11
    },
    {
        "type": "text",
        "text": "Details on computing. All experiments were conducted on a MacBook Pro laptop using Python 3.8, with huggingface version 4.35.2, torch version 2.0.1, sklearn version 1.2.1, numpy version 1.22.4 and scipy version 1.10.0.",
        "bbox": [112, 109, 487, 189],
        "page_idx": 11
    },
    {
        "type": "text",
        "text": "Hyperparameter estimation. Development set: 6 conditions sampled at random from the object features dataset: cities-danger, states-political, animals-wetness, cities-intelligence, animals-weight, names-age.",
        "bbox": [112, 199, 487, 278],
        "page_idx": 11
    },
    {
        "type": "text",
        "text": "As noted above, the only hyperparameters that made a difference were seed-dimension averaging (always beneficial) and the mixing parameter $\\alpha$. Chosen values:",
        "bbox": [112, 279, 487, 326],
        "page_idx": 11
    },
    {
        "type": "text",
        "text": "Best parameters:",
        "bbox": [132, 328, 260, 342],
        "page_idx": 11
    },
    {
        "type": "list",
        "sub_type": "text",
        "list_items": [
            "$\\alpha$ for FIT+SD: GloVe 0.02",
            "$\\alpha$ for FIT+S: GloVe 0.05"
        ],
        "bbox": [136, 354, 359, 394],
        "page_idx": 11
    },
    {
        "type": "text",
        "text": "Embedding spaces, and sentence selection. The GloVe embeddings used were trained on Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors), downloaded from https://nlp.stanford.edu/projects/glove/.",
        "bbox": [112, 407, 489, 487],
        "page_idx": 11
    },
    {
        "type": "text",
        "text": "In order to generate embeddings for contextualized instances of words in our datasets using the BERT (bert-large-uncased) and RoBERTa (roberta-large) models (Devlin et al., 2019; Liu et al., 2019), we used sentences from the UkWac corpus (Baroni et al., 2009). We collected ten sentences for each word, when available. We filtered out sentences with more than 100 tokens in order to avoid including noisy contexts such as webpage menus crawled from the web. If a word had fewer than 10 occurrences in UkWac, we used as many sentences as were available. This was the case for 10 words in the Grand et al. dataset (nairobi (6), seoul (4), taipei (5), lahore (2), baghdad (9), peyton (9), tehran (4), johannesburg (4), jaime (5), karachi (7)), and for one word (jazeera (6)) in the formality dataset. For hyphenated words in the Grand et al. dataset (e.g., new-york, south-carolina, south-dakota), we collected sentences where they occur without the hyphen.",
        "bbox": [112, 488, 489, 809],
        "page_idx": 11
    },
    {
        "type": "page_number",
        "text": "2686",
        "bbox": [480, 928, 519, 940],
        "page_idx": 11
    }
]
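Each `*_content_list.json` file added in this commit is, as above, a JSON array of layout blocks with `type`, `bbox`, and `page_idx` fields. As a small hedged sketch (not part of the dataset, with an illustrative path), such a file could be loaded and filtered like this:

```python
import json
from typing import Optional

def load_blocks(path: str, page_idx: Optional[int] = None) -> list:
    """Read a content_list.json file; optionally keep only one page's blocks."""
    with open(path, encoding="utf-8") as f:
        blocks = json.load(f)  # list of dicts: type, bbox, page_idx, ...
    if page_idx is not None:
        blocks = [b for b in blocks if b.get("page_idx") == page_idx]
    return blocks

# Example: collect the running text of page 5 of the paper above.
# blocks = load_blocks("3179a1d7-f1eb-4b22-bb06-32f6ebe41c32_content_list.json", page_idx=5)
# page_text = "\n".join(b["text"] for b in blocks if b["type"] == "text")
```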
2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/3179a1d7-f1eb-4b22-bb06-32f6ebe41c32_model.json
ADDED
The diff for this file is too large to render. See raw diff
2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/3179a1d7-f1eb-4b22-bb06-32f6ebe41c32_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f00225027c06987eed63119b5836dff9fb8f0be5447fb2279352d99e6723154a
size 726461
2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/full.md
ADDED
@@ -0,0 +1,307 @@
# Adjusting Interpretable Dimensions in Embedding Space with Human Judgments

Katrin Erk
University of Texas at Austin
katrin.erk@utexas.edu

Marianna Apidianaki
University of Pennsylvania
marapi@seas.upenn.edu

# Abstract

Embedding spaces contain interpretable dimensions indicating gender, formality in style, or even object properties. This has been observed multiple times. Such interpretable dimensions are becoming valuable tools in different areas of study, from social science to neuroscience. The standard way to compute these dimensions uses contrasting seed words and computes difference vectors over them. This is simple but does not always work well. We combine seed-based vectors with guidance from human ratings of where words fall along a specific dimension, and evaluate on predicting both object properties like size and danger, and the stylistic properties of formality and complexity. We obtain interpretable dimensions with markedly better performance, especially in cases where seed-based dimensions do not work well.

# 1 Introduction

Properties are commonly used in linguistics (Katz and Fodor, 1964; Jackendoff, 1990; Wierzbicka, 1996) as well as in psychology (Murphy, 2002) for representing word meanings and concepts. Those same properties are discoverable as interpretable dimensions in word embedding space, and can be used to explore the patterns and regularities encoded by Large Language Models (LLMs) (Mikolov et al., 2013b; Bolukbasi et al., 2016). Because LLMs are trained on texts from many different authors, we can view them as a compact repository of human utterances. This makes them an interesting resource for studying linguistic phenomena, analyzing social contexts of words, or as a stand-in for conceptual knowledge for interpreting brain voxels. Interpretable dimensions provide an attractive and simple way to access this resource (Grand et al., 2022; Kozlowski et al., 2019; Garí Soler and Apidianaki, 2020; Lucy et al., 2022). Compared to probing (Tenney et al., 2019; Conneau et al., 2018), interpretable dimensions allow for a direct exploration of LLM embedding space, without external classifiers.

Figure 1: Interpretable dimensions for two object categories and features from Grand et al. (2022): clothes by wealth (left), animals by size (right). PCA projection of embeddings, with seed-based (blue) and FIT+S (red) dimensions.

The most common way to obtain interpretable dimensions is to specify some seed pairs of antonyms, and take the average over their vector differences. But it is unclear what makes good seed pairs, or even how to test whether a particular property corresponds to a discernible dimension in embedding space. Antoniak and Mimno (2021) and Lucy et al. (2022) express concerns about the quality of commonly used hand-curated seed lexicons and propose metrics for evaluating seeds.
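To make the standard recipe concrete, the following is a minimal sketch (our illustration, not the paper's code) of the seed-based construction just described: average the difference vectors over antonym seed pairs, normalize, and score words by projection. The `glove` lookup and the pair choices are illustrative; the paper's formality seeds list the less formal member first (e.g., test - verify).

```python
# Seed-based interpretable dimension: averaged antonym difference vectors.
import numpy as np

def seed_dimension(emb, seed_pairs):
    """emb: dict word -> vector; seed_pairs: (negative, positive) tuples."""
    diffs = [emb[pos] - emb[neg] for neg, pos in seed_pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)  # unit-length direction

def scores(emb, words, d):
    """Scalar projection of each word's embedding onto dimension d."""
    return {w: float(emb[w] @ d) for w in words}

# Hypothetical usage with formality seeds (less formal member first):
# d = seed_dimension(glove, [("test", "verify"), ("terrible", "disastrous")])
# print(scores(glove, ["hat", "tuxedo", "gown"], d))
```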
|
| 31 |
+
|
| 32 |
+
In this paper, we take a different approach to addressing the problem of "bad seeds" (Antoniak and Mimno, 2021): We propose a method to augment seed-based interpretable dimensions with additional guidance from human ratings, and we show that this augmentation is particularly impactful when the original seed-based dimensions are problematic. Figure 1 shows words for clothes with dimensions for wealth, and animal names with dimensions for size, blue for seed-based dimensions and red for our new fitted dimensions. The fitted dimensions correct overly high wealth estimates for
|
| 33 |
+
|
| 34 |
+
tights and stockings, and exaggerated size estimates for bee and ant.
|
| 35 |
+
|
| 36 |
+
While interpretable dimensions are useful both to social science and to cognitive science, there is an important difference between the fields: In social science, crowd-sourced datasets cannot be trusted in absolute terms, because annotators may have social biases of their own; this is the scenario that Antoniak and Mimno (2021) address. In cognitive science, however, experimental data from human participants is central (though the method used to solicit the data can have an influence on the outcome). We work with data from cognitive science here, so we can use human ratings to improve seed dimensions.
|
| 37 |
+
|
| 38 |
+
Our new method draws inspiration from a completely separate strand of research on interpretable dimensions, in the context of knowledge graph embeddings (Derrac and Schockaert, 2015; Jameel and Schockaert, 2016; Bouraoui et al., 2022). There, interpretable dimensions are learned using labeled training data. In the current paper, we use a similar learning strategy, and apply it to a combination of seed-based dimensions and labeled training data. We apply this technique to predict human ratings on object properties and stylistic aspects of words, and find that it improves performance particularly in cases where seed-based dimensions underperform, and that in contrast to seed-based dimensions it is able to make predictions at the same scale as the original ratings.
|
| 39 |
+
|
| 40 |
+
A larger issue is: When can we trust that an interpretable dimension shows us what the LLM truly "knows" about the property in question, and that we are not misled by noise in our tool? This same worry also arises for probing classifiers (Hewitt and Liang, 2019; Belinkov, 2022).<sup>2</sup> We take one step towards addressing this issue: By combining two sources of information, seeds and human annotation, we hope to reduce noise present in either source.
|
| 41 |
+
|
| 42 |
+
# 2 Related Work
|
| 43 |
+
|
| 44 |
+
Interpretable dimensions in word embedding space were first observed in NLP (Bolukbasi et al.,
|
| 45 |
+
|
| 46 |
+
2016), and the idea was then taken up in neuroscience and social science. Grand et al. (2022) discover dimensions for objects' scalar properties (e.g., DANGER, SIZE, SPEED, WEALTH). Kozlowski et al. (2019) identify dimensions including AFFLUENCE, GENDER, RACE and MORALITY, and show that concepts such as sports (e.g., golf, boxing) and music genres (e.g., opera, rap, jazz) are ordered along these axes in ways that match cultural stereotypes. Garg et al. (2018) explore ethnic stereotypes, and Stoltz and Taylor (2021) go as far as to propose a cultural cartography with word embeddings. An et al. (2018) use a large number of dimensions to characterize sentiment, which Kwak et al. (2021) apply to whole documents. Interpretable dimensions have also been used to represent linguistic notions, such as complexity and scalar adjective intensity (Garí Soler and Apidianaki, 2020, 2021b; Lyu et al., 2023). In our work, we explore dimensions addressing object properties in the Grand et al. (2022) datasets, and the abstract notions of formality and complexity.
|
| 47 |
+
|
| 48 |
+
In all these studies, dimensions are discovered using the seed-based methodology, where a few seed pairs of antonyms are specified and the dimension is computed as the average over vector differences for these pairs. This method is simpler than alternative representation approaches (e.g., the multi-task learning framework of Allaway and McKeown (2021)).
|
| 49 |
+
|
| 50 |
+
Seed pair selection has until now been ad hoc; but some choices, such as the selected word pairs, their number and order, and the way they are combined, have a strong impact on the quality of the derived dimension. Antoniak and Mimno (2021) address the "bad seeds" problem by measuring the coherence of each seed set pairing after mapping to the bias subspace: When all words in the vocabulary are projected onto the subspace, the two seed sets should be drawn as far apart as possible. Lucy et al. (2022) propose to measure the semantic axis's self-consistency using a leave-one-out approach, where each seed is compared to an axis constructed from the remaining seeds. A good seed, when left out, should be closer to the pole it belongs to.
|
| 51 |
+
|
| 52 |
+
In our approach, we do not test for seed quality. Instead, we use human ratings to improve on seed-based dimensions. Our approach is inspired by work on knowledge graph embeddings (Derrac and Schockaert, 2015; Jameel and Schockaert, 2016; Bouraoui et al., 2020). Drawing on the conceptual spaces of Gärdenfors (2014) for intuition,
|
| 53 |
+
|
| 54 |
+
Jameel and Schockaert (2016) learn embeddings of knowledge graph nodes that include interpretable dimensions for properties. Like us, they learn interpretable dimensions using labeled training data. Our objective function is an adaptation of theirs, but differs in that they also learn the embedding space, while our space is fixed.
|
| 55 |
+
|
| 56 |
+
For constructing interpretable dimensions, most previous work used static embeddings (GloVe (Pennington et al., 2014) and word2vec (Mikolov et al., 2013a)). Recent work extends the methodology to contextualized representations (Garí Soler and Apidianaki, 2020; Lucy et al., 2022). We experiment with both kinds of embeddings.
|
| 57 |
+
|
| 58 |
+
# 3 Methods
|
| 59 |
+
|
| 60 |
+
# 3.1 Models
|
| 61 |
+
|
| 62 |
+
Seed-based dimensions (SEED model). The seed-based method is the most commonly used for computing interpretable dimensions (Bolukbasi et al., 2016; Kozlowski et al., 2019; Dev and Phillips, 2019; Garí Soler and Apidianaki, 2021b; Grand et al., 2022). A group of seed words is chosen to represent opposite ends of the dimension. For the DANGER dimension in Grand et al. (2022), for example, the seeds are {dangerous, deadly, threatening} for the positive side and {safe, harmless, calm} for the negative side. For each pair of a positive and negative seed word $p, n$ with vectors $\vec{p}, \vec{n}$, the difference vector $\vec{p} - \vec{n}$ is computed; this is a first estimate of the interpretable dimension, but the vectors can differ substantially across seed pairs. To obtain a more stable estimate, the vector for the interpretable dimension is then computed as the average of the difference vectors from individual seed pairs. The rating of any word $a$ on the property $d$ with interpretable dimension $\vec{d}$ (in our example from above, DANGER) is then predicted as the scalar projection onto the dimension:
|
| 63 |
+
|
| 64 |
+
$$
|
| 65 |
+
\|\operatorname{proj}_{\vec{a}}(\vec{d})\| = \frac{\vec{a} \cdot \vec{d}}{\|\vec{d}\|}
|
| 66 |
+
$$
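In code, the whole SEED pipeline comes down to a few lines of linear algebra. The following is a minimal sketch, assuming the paired positive and negative seed embeddings are given as NumPy arrays of shape (k, dim):

```python
import numpy as np

def seed_dimension(pos_vecs, neg_vecs):
    """Seed-based dimension: average of difference vectors p - n over
    paired positive/negative seed embeddings, each of shape (k, dim)."""
    return (np.asarray(pos_vecs) - np.asarray(neg_vecs)).mean(axis=0)

def scalar_projection(a, d):
    """Predicted rating for word vector `a`: its scalar projection onto `d`."""
    return a @ d / np.linalg.norm(d)
```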
|
| 67 |
+
|
| 68 |
+
Fitted dimensions (FIT model). Whenever we have gold ratings on some dimension, like human judgments on degrees of danger of different animals (Grand et al., 2022) or gold ratings for complexity (Lyu et al., 2023), we can estimate a direction in embedding space that best matches the gold ratings. We adapted an idea from Jameel and Schockaert (2016), who learn an embedding space
|
| 69 |
+
|
| 70 |
+
for knowledge graph nodes in such a way that properties of the nodes correspond to dimensions in space. But rather than learning a new space, we need to use an existing space spanned by static or contextualized embeddings, because it is these spaces, and the patterns in human language use that they encode, that we want to analyze.
|
| 71 |
+
|
| 72 |
+
We proceed as follows. Let $W = \langle w_1, \dots, w_n \rangle$ be an annotated dataset of $n$ words with real-valued gold ratings $\hat{Y} = \langle \hat{y}_1, \dots, \hat{y}_n \rangle$ for some feature $f$. Let $\vec{w}_i$ be the embedding of word $w_i$. For the dimension $\vec{f}$ to be computed for feature $f$ in that same embedding space, we stipulate that the scalar projection of $\vec{w}_i$ onto $\vec{f}$ be proportional to the gold rating $\hat{y}_i$. For example, say the gold rating (average human rating) of dolphin on the DANGER scale (on a scale of 1-5) is 2.1, and the gold rating of tiger is 4.9. Then we want the length of the projection $\mathrm{proj}_{\mathrm{dolphin}}(\mathrm{DANGER})$ to be $c_{\mathrm{DANGER}} \cdot 2.1 + b_{\mathrm{DANGER}}$, and the length of the projection $\mathrm{proj}_{\mathrm{tiger}}(\mathrm{DANGER})$ to be $c_{\mathrm{DANGER}} \cdot 4.9 + b_{\mathrm{DANGER}}$, for some weight and bias constants $c_{\mathrm{DANGER}}, b_{\mathrm{DANGER}} \in \mathbb{R}$. So in general, we would like to have
|
| 73 |
+
|
| 74 |
+
$$
|
| 75 |
+
\frac{\vec{w}_i \cdot \vec{f}}{\|\vec{f}\|} = c_f \hat{y}_i + b_f
|
| 76 |
+
$$
|
| 77 |
+
|
| 78 |
+
We turn this into a loss function for computing a fitted dimension $\vec{f}$, dropping the denominator $\|\vec{f}\|$:
|
| 79 |
+
|
| 80 |
+
$$
|
| 81 |
+
J_f = \sum_{w_i} \left(\vec{w}_i \cdot \vec{f} - c_f \hat{y}_i - b_f\right)^2
|
| 82 |
+
$$
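Note that, as written, $J_f$ has a degenerate minimum at $\vec{f} = 0$, $c_f = 0$, $b_f = 0$. A minimal sketch that sidesteps this by fixing $c_f = 1$ reduces the fit to ordinary least squares; this is an illustration of the objective, not necessarily the exact optimization setup used in the experiments:

```python
import numpy as np

def fit_dimension(W, y):
    """Fit f and b_f minimizing sum_i (w_i . f - y_i - b_f)^2, i.e. the
    J_f loss with c_f fixed to 1. W: (n, dim) embeddings, y: (n,) ratings."""
    X = np.hstack([W, np.ones((len(W), 1))])       # last column carries the bias
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    f, b = theta[:-1], -theta[-1]                  # so that W @ f - b ≈ y
    return f, b
```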
|
| 83 |
+
|
| 84 |
+
Fitted dimensions with seed words (FIT+SW model). We also test whether fitted dimensions can be guided by the seed words used to make seed-based dimensions. The first method follows the intuition of Antoniak and Mimno (2021) that the scalar projections of seed words should sit "far out" on an interpretable dimension, further than other words. The FIT+SW model simply extends the collection $W$ of rated words with the seed words. We make synthetic gold ratings for the seed words, giving them extreme ratings: $\max(\hat{Y}) + o$ for positive seed words, and $\min(\hat{Y}) - o$ for negative seed words, for an offset $o$ that is a hyperparameter. We optionally add a small amount of random jitter (sampled from the interval [0.001, 0.005]) so that the seed words do not all have the same rating. A sketch of this augmentation is shown below.
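The sketch assumes the seed embeddings come as two arrays aligned with the positive and negative poles:

```python
import numpy as np

def add_seed_words(W, y, pos_vecs, neg_vecs, offset, seed=0):
    """FIT+SW data augmentation: append seed words with synthetic extreme
    ratings max(y) + o (positive pole) and min(y) - o (negative pole),
    plus a small jitter so the seed ratings are not all identical."""
    rng = np.random.default_rng(seed)
    y_pos = y.max() + offset + rng.uniform(0.001, 0.005, len(pos_vecs))
    y_neg = y.min() - offset - rng.uniform(0.001, 0.005, len(neg_vecs))
    W_ext = np.vstack([W, pos_vecs, neg_vecs])
    y_ext = np.concatenate([y, y_pos, y_neg])
    return W_ext, y_ext
```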
|
| 85 |
+
|
| 86 |
+
Fitted dimensions with seed dimensions (FIT+SD model). Our second way of extending fitted dimensions with seed word information is built on
|
| 87 |
+
|
| 88 |
+
the idea that seed-based dimensions and human ratings both provide useful information for fitting an interpretable dimension, and that they should be combined. So we use an overall loss function of
|
| 89 |
+
|
| 90 |
+
$$
|
| 91 |
+
J = \alpha J_f + (1 - \alpha) J_d(D)
|
| 92 |
+
$$
|
| 93 |
+
|
| 94 |
+
where $J_{f}$ is the loss function from above, and $\alpha$ is a hyperparameter. $J_{d}(D)$ is a loss that measures distance of the fitted dimension $\vec{f}$ from a set $D$ of seed-based dimensions, defined as
|
| 95 |
+
|
| 96 |
+
$$
|
| 97 |
+
J_d(D) = \sum_{\vec{d} \in D} \left(1 - \operatorname{cosine}(\vec{d}, \vec{f})\right)
|
| 98 |
+
$$
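A gradient-descent sketch of the combined objective for a single averaged seed dimension follows; the optimizer, step count, and the initialization of $\vec{f}$ at the seed dimension are assumptions of the sketch, not specifications from the experiments:

```python
import torch

def fit_with_seed_dimension(W, y, d_seed, alpha=0.05, steps=2000, lr=0.01):
    """Minimize J = alpha * J_f + (1 - alpha) * J_d for a single averaged
    seed dimension. W: (n, dim), y: (n,), d_seed: (dim,) float tensors."""
    f = d_seed.clone().detach().requires_grad_(True)  # init at the seed dimension
    c = torch.ones(1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([f, c, b], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        J_f = ((W @ f - c * y - b) ** 2).sum()
        J_d = 1 - torch.nn.functional.cosine_similarity(f, d_seed, dim=0)
        (alpha * J_f + (1 - alpha) * J_d).backward()
        opt.step()
    return f.detach(), c.item(), b.item()
```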
|
| 99 |
+
|
| 100 |
+
Fitted dimensions with seeds as words and dimensions (FIT+S model). Our final model uses seeds both as seed words, as in FIT+SW, and as seed dimensions, as in FIT+SD.
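FIT+S then simply chains the two mechanisms, reusing the hypothetical helpers from the sketches above (the offset value here is illustrative):

```python
import torch

# Seeds enter both as extra rated datapoints (FIT+SW) ...
W_ext, y_ext = add_seed_words(W, y, pos_vecs, neg_vecs, offset=1.0)
# ... and as a direction penalty on the fitted dimension (FIT+SD).
f, c, b = fit_with_seed_dimension(
    torch.as_tensor(W_ext, dtype=torch.float32),
    torch.as_tensor(y_ext, dtype=torch.float32),
    d_seed, alpha=0.05)
```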
|
| 101 |
+
|
| 102 |
+
Baselines. We compare our methods to a baseline which ranks words by frequency (FREQ). Frequency has been a strong baseline for complexity and formality in previous work, given that rare words tend to be more complex than frequently used words (Brooke et al., 2010). We use log-transformed frequency counts in the Google N-gram corpus (Brants and Franz, 2006). We also use a random baseline, which assigns to each word a randomly selected score in the range [-3, 3].<sup>3</sup>
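Both baselines are trivial to realize; in the sketch below, `counts` is a hypothetical word-to-raw-count mapping standing in for the Google N-gram frequencies:

```python
import math
import random

def freq_baseline(words, counts):
    """FREQ: score each word by its log-transformed corpus frequency."""
    return {w: math.log(counts[w]) for w in words}

def random_baseline(words, seed=0):
    """RAND: a uniformly random score in [-3, 3] for each word."""
    rng = random.Random(seed)
    return {w: rng.uniform(-3, 3) for w in words}
```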
|
| 103 |
+
|
| 104 |
+
# 3.2 Evaluation metrics
|
| 105 |
+
|
| 106 |
+
In contrast to interpretable dimensions computed from seed words, the FIT models use training data: words with human ratings for the property in question. When we evaluate these models, we use up some of the data for training, leaving less for testing. To mitigate this issue, we do cross-validation, and we focus on evaluation metrics that work well with smaller test sets. We do not use the correlation metrics used in Garí Soler and Apidianaki (2020) and Grand et al. (2022), as significance tests become unreliable with small datasets. Instead, we use a variant of pairwise rank accuracy, a metric used in Garí Soler and Apidianaki (2020), Cocos et al. (2018), and Grand et al. (2022).
|
| 107 |
+
|
| 108 |
+
Pairwise rank accuracy measures the percentage of pairs of words whose ordering in the gold ratings is the same as in the model predictions. We define a new variant which we call extended pairwise rank accuracy, $\mathbf{r}^{+}$ -acc, which measures pairwise
|
| 109 |
+
|
| 110 |
+
rank accuracy among words in the test set, and additionally pairwise rank accuracy between each test word and each training word. For example, if tiger and butterfly are in the training set for DANGER, and cat is in the test, we check whether the score assigned to cat ranks it after tiger and before butterfly. This metric gives us more evidence on the quality of predictions than pairwise rank accuracy on its own because it includes more comparisons, thus making the metric less sparse. Let $<_{g}, <_{m}$ be two complete orderings of the words in $W$ , the gold and model orderings, respectively. For words $w, w'$ in $W$ , we define an auxiliary function $rm$ for "rank match":
|
| 111 |
+
|
| 112 |
+
$$
|
| 113 |
+
rm_{<_g, <_m}(w, w') = \begin{cases} 1 & \text{if } (w <_g w' \wedge w <_m w') \vee (w >_g w' \wedge w >_m w') \\ 0 & \text{otherwise} \end{cases}
|
| 114 |
+
$$
|
| 115 |
+
|
| 116 |
+
Then standard pairwise rank accuracy on $W = \langle w_1, \ldots, w_n \rangle$ is defined as
|
| 117 |
+
|
| 118 |
+
$$
|
| 119 |
+
\mathrm{r\text{-}acc}_W(<_g, <_m) = \frac{2}{n(n-1)} \sum_{1 \leq i < j \leq n} rm_{<_g, <_m}(w_i, w_j)
|
| 120 |
+
$$
|
| 121 |
+
|
| 122 |
+
Now assume that $T = \{k_1, \dots, k_\ell\}$ , with $k_j \in \{1, \dots, n\}$ for all $j$ , is the set of test word indices among the indices of $W$ . Assume both orderings, $<_g$ and $<_m$ , are defined on all of $W$ . Then our new extended pairwise rank accuracy is
|
| 123 |
+
|
| 124 |
+
$$
|
| 125 |
+
\mathrm{r}^{+}\text{-}\mathrm{acc}_W(<_g, <_m) = \frac{1}{\ell(\ell-1)/2 + \ell(n-\ell)} \left( \sum_{1 \leq i < j \leq \ell} rm_{<_g, <_m}(w_{k_i}, w_{k_j}) + \sum_{i \in \{1, \dots, \ell\},\, j \notin T} rm_{<_g, <_m}(w_{k_i}, w_j) \right)
|
| 126 |
+
$$
|
| 127 |
+
|
| 128 |
+
The first half of this formula measures pairwise rank accuracy among members of the test set; the second half measures rank accuracy of test words with respect to training words.
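Spelled out in code, with the metric normalized by the number of pairs actually compared, a sketch that assumes no ties among the scores looks as follows:

```python
from itertools import combinations

def extended_rank_accuracy(gold, pred, test_idx):
    """r+-acc: fraction of correctly ordered pairs, over test-test
    pairs plus test-train pairs. gold, pred: scores over all of W;
    test_idx: indices of the test words."""
    test = set(test_idx)
    train = [j for j in range(len(gold)) if j not in test]
    pairs = list(combinations(sorted(test), 2))
    pairs += [(i, j) for i in test for j in train]
    hits = sum((gold[i] < gold[j]) == (pred[i] < pred[j]) for i, j in pairs)
    return hits / len(pairs)
```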
|
| 129 |
+
|
| 130 |
+
Pairwise rank accuracy and extended pairwise rank accuracy are similar to correlation metrics in that they measure to what extent gold and model-based rankings agree. And in fact all three metrics are highly correlated: We tested correlation between the three metrics on seed-based dimensions for the Grand et al. (2022) data and obtained highly significant correlations ($p \ll 0.0001$; $r = 0.972$ for pairwise rank accuracy, $r = 0.971$ for extended pairwise rank accuracy).<sup>4</sup>
|
| 131 |
+
|
| 132 |
+
As a second evaluation metric, we test how far off from the gold ratings each individual predicted rating is. We use the mean squared error (MSE) of predicted ratings compared to gold ratings. We can do this because all FIT models learn to predict ratings on the same scale as the gold ratings. In order to apply the same evaluation to the SEED model and the baselines, we simply use linear regression to map model predictions to ratings on the same scale as the gold ratings. Linear regression models are fit on the training portion of each dataset, so that test words remain truly unseen.
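A sketch of this calibrated evaluation, using scikit-learn (in line with the sklearn dependency listed in the Appendix):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def calibrated_mse(train_proj, train_gold, test_proj, test_gold):
    """Map raw scalar projections onto the gold rating scale with a
    linear regression fit on the training split, then report MSE
    on the held-out test words."""
    reg = LinearRegression().fit(np.reshape(train_proj, (-1, 1)), train_gold)
    return mean_squared_error(test_gold, reg.predict(np.reshape(test_proj, (-1, 1))))
```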
|
| 133 |
+
|
| 134 |
+
# 3.3 Data and Vectors
|
| 135 |
+
|
| 136 |
+
We use the ratings collected by Grand et al. (2022), which describe properties of objects in nine categories: animals, clothing, professions, weather phenomena, sports, mythological creatures, world cities, states of the United States, and first names. Each category is matched with a subset of these semantic features: age, arousal, cost, danger, gender, intelligence, location (indoors vs. outdoors), partisanship (liberal vs. conservative), religiosity, size, speed, temperature, valence, volume, wealth, weight, and wetness. For style, we use datasets released by Pavlick and Nenkova (2015) which contain words and phrases with human ratings of formality and complexity. For each dimension, we sample words with high annotation confidence (i.e., where annotators agreed on how complex or formal the word is): We calculate the mean standard deviation for words in our sample, and keep words whose standard deviation across human scores is lower than that mean. The filtered datasets contain 1,160 words for complexity, and 1,274 words for formality.
|
| 137 |
+
|
| 138 |
+
We extract seed words from two other datasets released by Pavlick and Nenkova (2015) which contain pairwise paraphrase judgments of formality and complexity. Annotations reflect which phrase in a pair (e.g., letter-communication, largely-extensively) is more complex or formal than the other. We collect five pairs of words for each style
|
| 139 |
+
|
| 140 |
+
<sup>5</sup> The data is available on the Open Science Framework at https://osf.io/5r2sz/. No license information is given in the repository.
|
| 141 |
+
<sup>6</sup> Most categories consist of 50 items.
|
| 142 |
+
<sup>7</sup> We sample words with more than three characters to exclude pronouns, articles, numerals, and multiword phrases.
|
| 143 |
+
<sup>8</sup> The data is available at https://cs.brown.edu/people/epavlick/data.html, under "Style Lexicons: Human and automatic scores of formality and complexity for words, phrases, and sentences". No license for the data is given.
|
| 144 |
+
|
| 145 |
+
dimension for which inter-rater agreement is high. For complexity, we obtain the seed pairs work - employment, further - subsequently, strong - powerful, train - railway, shown - indicated, where the first member of each pair is the negative seed (the simpler word). For formality, we used winner - recipient, terrible - disastrous, membership - affiliation, highest - paramount, test - verify, where again the first member of each pair is the negative seed (the less formal word).
|
| 146 |
+
|
| 147 |
+
Following Grand et al. (2022), we averaged over human subject ratings for each datapoint, then normalized ratings to z-scores separately for each pair of a category and property. For formality and complexity, ratings were also converted to z-scores.
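The normalization itself is straightforward; a sketch, assuming per-word lists of raw subject ratings for a single category/property pair:

```python
import numpy as np

def zscore_ratings(ratings_by_word):
    """Average raw subject ratings per word, then z-score within the
    condition. ratings_by_word: dict mapping word -> list of ratings."""
    means = {w: float(np.mean(r)) for w, r in ratings_by_word.items()}
    vals = np.fromiter(means.values(), dtype=float)
    mu, sd = vals.mean(), vals.std()
    return {w: (m - mu) / sd for w, m in means.items()}
```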
|
| 148 |
+
|
| 149 |
+
As embeddings, we use the same representations as Grand et al. (2022): pre-trained GloVe embeddings trained on Common Crawl (42B tokens), 300 dimensions, uncased (Pennington et al., 2014). We also use contextualized representations from the BERT (bert-large-uncased) and RoBERTa (roberta-large) models (Devlin et al., 2019; Liu et al., 2019), computed on sentences from ukWaC (Baroni et al., 2009). For each word instance, we average its contextualized representations from the top 4 layers of the model. If the word is split into multiple wordpieces during tokenization, we average the representations of its pieces in order to obtain a single type-level representation for each word, as is common practice in semantic probing studies (Bommasani et al., 2020; Vulić et al., 2020; Garí Soler and Apidianaki, 2021a). The final representation for a word is the average of its representations from the available sentences. Aggregating representations across multiple contexts is the most common approach for creating word type-level embeddings from contextualized representations; it serves to null out, to some extent, the impact of specific contexts (Apidianaki, 2022). It is possible to use more sophisticated sentence selection methods, such as language modeling criteria (Garí Soler and Apidianaki, 2020) or the exclusion of contexts where antonyms could occur (Lucy et al., 2022). However, applying such context selection methods is not always
|
| 150 |
+
|
| 151 |
+
better than random selection, which might be due to the skewed distribution of word senses and the stronger presence of the most frequent sense of a word in randomly selected sentences (Kilgarriff, 2004; McCarthy et al., 2004).
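A sketch of this embedding pipeline with the transformers library follows; the subsequence search for the target word's pieces, and skipping sentences where the exact surface form is not found, are simplifying assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModel.from_pretrained("bert-large-uncased",
                                  output_hidden_states=True).eval()

def type_embedding(word, sentences):
    """Type-level vector: average over the word's wordpieces, over the
    top 4 hidden layers, and over up to 10 pooled sentences."""
    target = tok(word, add_special_tokens=False)["input_ids"]
    per_sent = []
    for sent in sentences[:10]:
        enc = tok(sent, return_tensors="pt", truncation=True)
        ids = enc["input_ids"][0].tolist()
        start = next((i for i in range(len(ids) - len(target) + 1)
                      if ids[i:i + len(target)] == target), None)
        if start is None:
            continue                                  # surface form not found
        with torch.no_grad():
            hidden = model(**enc).hidden_states       # per-layer (1, seq, dim)
        top4 = torch.stack(hidden[-4:]).mean(dim=0)[0]
        per_sent.append(top4[start:start + len(target)].mean(dim=0))
    return torch.stack(per_sent).mean(dim=0)
```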
|
| 152 |
+
|
| 153 |
+
# 4 Results and discussion
|
| 154 |
+
|
| 155 |
+
In this section we evaluate the different interpretable dimension models from Section 3.1 on the tasks of predicting human ratings of object properties (Grand et al., 2022), and human ratings of the complexity and formality of words (Pavlick and Nenkova, 2015).
|
| 156 |
+
|
| 157 |
+
Experimental setup. To make the most of the limited available data, all models were tested in 5-fold cross-validation. In addition, all models that involve randomness (all except SEED) were re-run three times with different random seeds. For the Grand et al. object features, we first compute mean $r^+$-acc and median MSE for each category/property pair (averaging over cross-validation runs and random seeds), then report averages over those. For formality and complexity we report overall mean $r^+$-acc and median MSE.<sup>12</sup> Note that because we split the data into training and test using cross-validation, the numbers reported in this paper are not comparable with those reported in earlier papers on the same dataset. We do however compute SEED dimensions with the same cross-validation setup as the FIT-based dimensions, so that the numbers that we report are comparable to each other.
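The evaluation loop can be sketched as follows, with hypothetical `fit` and `score` callables standing in for the models and metrics described above:

```python
import numpy as np
from sklearn.model_selection import KFold

def evaluate(W, y, fit, score, n_seeds=3):
    """5-fold cross-validation, repeated over random seeds for models
    that involve randomness; returns the mean and the raw run scores."""
    runs = []
    for seed in range(n_seeds):
        for tr, te in KFold(n_splits=5, shuffle=True, random_state=seed).split(W):
            model = fit(W[tr], y[tr], seed)
            runs.append(score(model, W[te], y[te], tr))
    return float(np.mean(runs)), runs
```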
|
| 158 |
+
|
| 159 |
+
To set hyperparameters, we sample 6 category/property pairs from the Grand et al. data as a development set. Hyperparameters were optimized once per embedding space; there was no separate hyperparameter optimization for the formality and complexity data.<sup>13</sup> Overall, we find that low values of $\alpha$ work well, and that it is beneficial to input a single averaged seed dimension to FIT+SD and FIT+S, rather than individual seed dimensions. The choice of offset and jitter does not matter. More details on hyperparameters and the development set can be found in the Appendix. Results reported below for the Grand et al. data are for all category/property pairs not in the development set.
|
| 160 |
+
|
| 161 |
+
Overall performance. Overall results are shown for object properties in Table 1 and for stylistic
|
| 162 |
+
|
| 163 |
+
features in Table 2. Looking at object properties first, and focusing on extended rank accuracy, the FIT model by itself is not very good, and adding seeds as words (FIT+SW) does not help. FIT+SD is better, and outperforms SEED slightly, but the FIT+S model, which computes fitted dimensions using seed information both as words and dimensions, shows the best performance, outperforming SEED strongly. In terms of MSE, even medians are very high for the SEED model, so many seed-based dimensions were not able to predict ratings on the same scale as the gold ratings. MSE is much lower for both fitted models that make use of seed dimensions, especially FIT+S.<sup>14</sup> Looking at the baselines, the FIT and FIT+SW models with BERT and RoBERTa underperform the frequency baseline, and are on par with random guessing. The frequency baseline is somewhat higher than random, though it is not entirely clear what kind of signal for object properties can be derived from word frequency.
|
| 164 |
+
|
| 165 |
+
On the stylistic data, the relative performance of the fitted models is similar, but here they mostly do not outperform the SEED dimensions. Overall performance of the SEED dimensions is higher here, which raises the question of whether fitted models help particularly when seed-based dimensions do not perform well; we explore this further below. Looking at MSE, we confirm that fitted models that use seed dimensions have much lower error than the other models. Comparing embedding spaces, we consistently see the best performance with GloVe embeddings. The BERT and RoBERTa FIT and FIT+SW models in particular are again at the level of the random baseline. The frequency baseline is reasonably strong, matching previous findings.
|
| 166 |
+
|
| 167 |
+
Fitted dimensions, by themselves, are underdetermined by human ratings. The FIT model, which computes fitted dimensions from human ratings only, does not perform well, and we suspect that the size of the embedding space allows for too many ways to fit a dimension to ratings, causing the model to overfit. To test this, we first computed FIT dimensions for the Grand et al. object features from all human ratings, obtaining perfectly fitted dimensions in every single case. We next trained dimensions on all human ratings but scrambled the word/rating pairs, making them nonsensical. Again we obtained perfectly fitted dimensions in every single
|
| 168 |
+
|
| 169 |
+
<table><tr><td colspan="2"></td><td colspan="2">SEED</td><td colspan="2">FIT</td><td colspan="2">FIT+SW</td><td colspan="2">FIT+SD</td><td colspan="2">FIT+S</td></tr><tr><td rowspan="2">GLoVE</td><td>r+acc</td><td>0.64</td><td>(0.1)</td><td>0.54</td><td>(0.03)</td><td>0.53</td><td>(0.03)</td><td>0.65</td><td>(0.1)</td><td>0.80</td><td>(0.06)</td></tr><tr><td>MSE</td><td colspan="2">>1000 (>1000)</td><td>113.2</td><td>(111.7)</td><td>177.1</td><td>(125.4)</td><td>89.6</td><td>(199.6)</td><td>0.7</td><td>(0.36)</td></tr><tr><td rowspan="2">BERT</td><td>r+acc</td><td>0.64</td><td>(0.1)</td><td>0.51</td><td>(0.03)</td><td>0.52</td><td>(0.03)</td><td>0.66</td><td>(0.10)</td><td>0.71</td><td>(0.04)</td></tr><tr><td>MSE</td><td colspan="2">>1000 (>1000)</td><td>417.4</td><td>(271.7)</td><td>597.4</td><td>(525.6)</td><td>115.0</td><td>(437.4)</td><td>2.0</td><td>(0.6)</td></tr><tr><td rowspan="2">RoBERTa</td><td>r+acc</td><td>0.57</td><td>(0.08)</td><td>0.51</td><td>(0.03)</td><td>0.51</td><td>(0.03)</td><td>0.60</td><td>(0.1)</td><td>0.69</td><td>(0.04)</td></tr><tr><td>MSE</td><td colspan="2">>1000 (>1000)</td><td>392.5</td><td>(291.3)</td><td>458.0</td><td>(284.3)</td><td>125.2</td><td>(270.5)</td><td>1.9</td><td>(0.6)</td></tr></table>
|
| 170 |
+
|
| 171 |
+
Table 1: Results on object properties: Extended rank accuracy (abbreviated $\mathbf{r}^{+}$ -acc) and Mean Squared Error (MSE), averaged over category/property pairs. In brackets: Standard error. Shown for 3 embedding spaces. Bolded: best performance for each embedding.
|
| 172 |
+
|
| 173 |
+
<table><tr><td rowspan="2" colspan="2"></td><td colspan="5">Complexity</td><td colspan="6">Formality</td></tr><tr><td>SEED</td><td>FIT</td><td>FIT+SW</td><td>FIT+SD</td><td>FIT+S</td><td rowspan="7">FREQ r+-acc: 0.65 RAND r+-acc: 0.50</td><td>SEED</td><td>FIT</td><td>FIT+SW</td><td>FIT+SD</td><td>FIT+S</td></tr><tr><td rowspan="2">GLoVE</td><td>r+-acc</td><td>0.74</td><td>0.59</td><td>0.57</td><td>0.76</td><td>0.72</td><td>0.73</td><td>0.53</td><td>0.37</td><td>0.68</td><td>0.69</td></tr><tr><td>MSE</td><td>31.5</td><td>24.3</td><td>75.1</td><td>1.5</td><td>1.2</td><td>60.5</td><td>396.5</td><td>285.7</td><td>1.8</td><td>1.6</td></tr><tr><td rowspan="2">BERT</td><td>r+-acc</td><td>0.69</td><td>0.52</td><td>0.52</td><td>0.71</td><td>0.72</td><td>0.64</td><td>0.52</td><td>0.51</td><td>0.64</td><td>0.69</td></tr><tr><td>MSE</td><td>123.5</td><td>437.2</td><td>724.0</td><td>3.6</td><td>2.4</td><td>215.6</td><td>216.3</td><td>617.1</td><td>7.8</td><td>3.2</td></tr><tr><td rowspan="2">RoBERTa</td><td>r+-acc</td><td>0.74</td><td>0.51</td><td>0.51</td><td>0.71</td><td>0.73</td><td>0.67</td><td>0.53</td><td>0.53</td><td>0.66</td><td>0.71</td></tr><tr><td>MSE</td><td>82.7</td><td>591.9</td><td>>1000</td><td>3.9</td><td>2.3</td><td>223.0</td><td>325.0</td><td>778.1</td><td>7.3</td><td>2.4</td></tr></table>
|
| 174 |
+
|
| 175 |
+
Table 2: Results on formality and complexity: Extended rank accuracy $(\mathbf{r}^{+}$ -acc) and Mean Squared Error (MSE). Shown for 3 embedding spaces. Bolded: best performance for each embedding.
|
| 176 |
+
|
| 177 |
+

|
| 178 |
+
Figure 2: Increase in $r^+$ -acc for FIT+S over SEED, object properties grouped by performance of SEED.
|
| 179 |
+
|
| 180 |
+
case, which confirms our suspicion. The picture that emerges is that FIT by itself does not have enough information to fit a good dimension and overfits to the training data. The seed information provided to FIT+SW, FIT+SD and FIT+S gives the models the additional guidance needed to make good use of the human ratings, and the combination of seeds and human ratings on words leads to overall better dimensions – at least in some cases. We next ask which cases those are.
|
| 181 |
+
|
| 182 |
+
Human ratings help most when seed-based dimensions underperform. Comparing $r^+$-acc values for seed-based dimensions and FIT+S dimensions on the object property data, we find that FIT+S improves over SEED in every single one of the 50 category/property pairs.<sup>15</sup> The performance
|
| 183 |
+
|
| 184 |
+
increase is highest when performance of the seed-based dimensions is lowest, as shown in Figure 2: For the $20\%$ of category/property pairs with lowest SEED performance, average improvement is 27.3 points, while for the $20\%$ of category/property pairs with the highest SEED performance, average improvement is 1.7 points. This could explain the lack of improvement achieved on stylistic features, as SEED already performs well on this data.
|
| 185 |
+
|
| 186 |
+
Table 3 further zooms in on the object feature data, showing performance on some category/property pairs with low, medium, and high performance of the SEED dimensions. We see that FIT and FIT+SW underperform throughout. FIT+S shows the overall best performance, but the improvement over SEED is particularly high for the first group of conditions, where SEED dimensions get no traction on the data. FIT+SD shows good extended rank accuracy on the conditions with medium to high SEED performance, but not on the conditions that are particularly poorly modeled by SEED.
|
| 187 |
+
|
| 188 |
+
FIT+S models are the only ones that predict ratings on the gold scale. We saw above that median MSE values are extremely high for many models, especially for SEED. We now take a closer look; in particular, we want to know how often we obtain MSE values that are extremely far off from the gold ratings. We again focus on the object feature
|
| 189 |
+
|
| 190 |
+
<table><tr><td rowspan="2">Category, Feature</td><td colspan="5">r+ -acc</td></tr><tr><td>SEED</td><td>FIT</td><td>FIT+SW</td><td>FIT+SD</td><td>FIT+S</td></tr><tr><td>sports/speed</td><td>0.46</td><td>0.55</td><td>0.56</td><td>0.52</td><td>0.78</td></tr><tr><td>states/cost</td><td>0.46</td><td>0.5</td><td>0.5</td><td>0.42</td><td>0.83</td></tr><tr><td>cities/arousal</td><td>0.47</td><td>0.52</td><td>0.5</td><td>0.51</td><td>0.82</td></tr><tr><td>animals/intelligence</td><td>0.48</td><td>0.55</td><td>0.54</td><td>0.5</td><td>0.79</td></tr><tr><td>clothing/cost</td><td>0.48</td><td>0.52</td><td>0.51</td><td>0.55</td><td>0.76</td></tr><tr><td>clothing/wealth</td><td>0.62</td><td>0.52</td><td>0.53</td><td>0.6</td><td>0.82</td></tr><tr><td>states/wealth</td><td>0.62</td><td>0.55</td><td>0.56</td><td>0.64</td><td>0.82</td></tr><tr><td>weather/temperature</td><td>0.69</td><td>0.56</td><td>0.52</td><td>0.66</td><td>0.76</td></tr><tr><td>animals/danger</td><td>0.7</td><td>0.6</td><td>0.57</td><td>0.76</td><td>0.84</td></tr><tr><td>clothing/age</td><td>0.71</td><td>0.52</td><td>0.55</td><td>0.71</td><td>0.8</td></tr><tr><td>weather/danger</td><td>0.79</td><td>0.55</td><td>0.54</td><td>0.7</td><td>0.82</td></tr><tr><td>clothing/gender</td><td>0.81</td><td>0.56</td><td>0.53</td><td>0.8</td><td>0.82</td></tr><tr><td>sports/gender</td><td>0.81</td><td>0.61</td><td>0.56</td><td>0.81</td><td>0.84</td></tr><tr><td>professionals/gender</td><td>0.85</td><td>0.56</td><td>0.56</td><td>0.87</td><td>0.86</td></tr><tr><td>names/gender</td><td>0.87</td><td>0.56</td><td>0.51</td><td>0.87</td><td>0.87</td></tr></table>
|
| 191 |
+
|
| 192 |
+

|
| 193 |
+
Figure 3: MSE distributions for runs of different models. y-axis: count of runs.
|
| 194 |
+
|
| 195 |
+
data, as there we have many conditions that we can compare. Figure 3 shows, for each model, how many runs had MSE values in the ranges of $<2$, $2-10$, $10-100$, and $>100$. Recall that gold ratings are z-scores, so they tend to be in a range of -2 to 2. We again only use the category/property pairs that are not in the development set, but now count each cross-validation run and each random seed separately. We see that many runs of SEED, FIT and FIT+SW have very high MSE values. For FIT+SD we see a considerable percentage of runs with MSE values below 2 (the blue bar comprises $20\%$ of runs for this model), but strikingly, $97\%$ of FIT+S runs have MSE values below 2, and all have values below 10. So this model is much more consistent than the other models, and in fact is highly consistent in fitting dimensions that deliver predictions in the range of the gold data.
|
| 196 |
+
|
| 197 |
+
Zooming in: Examples of predictions. We take a closer look at two kinds of object properties: clothes by wealth, and animals by size. In order to obtain sufficiently many test datapoints to look at, we divide the data into 2/3 training and 1/3
|
| 198 |
+
|
| 199 |
+

|
| 200 |
+
Figure 4: Clothes rated for wealth (left), animals rated for size (right). Gold ratings: dark purple. SEED predictions: red. FIT+S prediction: light blue. Datapoints ordered by gold rank.
|
| 201 |
+
|
| 202 |
+

|
| 203 |
+
|
| 204 |
+
Table 3: Detailed results for Grand et al. by SEED performance: lowest performance (top box), middling performance (middle), best performance (bottom).
|
| 205 |
+
|
| 206 |
+
<table><tr><td>Gold</td><td>SEED</td><td>FIT+S</td></tr><tr><td>bee</td><td>chipmunk</td><td>butterfly</td></tr><tr><td>ant</td><td>hamster</td><td>bee</td></tr><tr><td>butterfly</td><td>monkey</td><td>chipmunk</td></tr><tr><td>goldfish</td><td>butterfly</td><td>bird</td></tr><tr><td>hamster</td><td>goldfish</td><td>hamster</td></tr><tr><td>chipmunk</td><td>dog</td><td>ant</td></tr><tr><td>bird</td><td>bee</td><td>crow</td></tr><tr><td>crow</td><td>bird</td><td>goldfish</td></tr><tr><td>dog</td><td>seal</td><td>seal</td></tr><tr><td>monkey</td><td>crow</td><td>dog</td></tr><tr><td>seal</td><td>ant</td><td>monkey</td></tr><tr><td>mammoth</td><td>whale</td><td>whale</td></tr><tr><td>whale</td><td>mammoth</td><td>mammoth</td></tr></table>
|
| 207 |
+
|
| 208 |
+
Table 4: Comparing word rankings by humans, SEED dimensions, and FIT+S dimensions: Animals by size. Italicized: 3 words with highest error in ranking.
|
| 209 |
+
|
| 210 |
+
test (as opposed to the 1/5 we use with 5-fold cross-validation). Figure 1 shows the test data words, along with seed-based and FIT+S dimensions, down-projected into 2 dimensions using PCA. For the same datapoints, Figure 4 plots gold ratings, SEED predictions, and FIT+S predictions. This plot illustrates how the SEED predictions are on a much larger scale than the gold ratings, while FIT+S is the only model whose predictions stay on the same scale. (The next-to-last datapoint in animals/size is mammoth, which SEED largely overestimates, maybe because mammoth is also an adjective indicating gargantuan size.) Tables 4 and 5 show how the test data words for animals by size, and for clothes by wealth, are ranked by humans, by the SEED dimension, and by the FIT+S dimension. The italicized words are the three words whose model rank is furthest off from their gold rank. For the animals data, both models mis-rank ant, and overall seem to struggle more with smaller animals. Among the clothes, both models overestimate the wealth projected by wearing hats.
|
| 211 |
+
|
| 212 |
+
<table><tr><td>Gold</td><td>SEED</td><td>FIT+S</td></tr><tr><td>sweatshirt</td><td>shorts</td><td>shorts</td></tr><tr><td>shorts</td><td>sweatshirt</td><td>boots</td></tr><tr><td>belt</td><td>belt</td><td>bikini</td></tr><tr><td>boots</td><td>blouse</td><td>tights</td></tr><tr><td>hat</td><td>boots</td><td>skirt</td></tr><tr><td>tights</td><td>swimsuit</td><td>swimsuit</td></tr><tr><td>bikini</td><td>skirt</td><td>stockings</td></tr><tr><td>swimsuit</td><td>trousers</td><td>trousers</td></tr><tr><td>skirt</td><td>bikini</td><td>loafers</td></tr><tr><td>blouse</td><td>robe</td><td>blouse</td></tr><tr><td>knickers</td><td>cuff</td><td>belt</td></tr><tr><td>dress</td><td>knickers</td><td>knickers</td></tr><tr><td>collar</td><td>hat</td><td>dress</td></tr><tr><td>trousers</td><td>collar</td><td>sweatshirt</td></tr><tr><td>stockings</td><td>vest</td><td>robe</td></tr><tr><td>robe</td><td>tights</td><td>collar</td></tr><tr><td>vest</td><td>loafers</td><td>cuff</td></tr><tr><td>cuff</td><td>stockings</td><td>vest</td></tr><tr><td>loafers</td><td>dress</td><td>hat</td></tr><tr><td>gown</td><td>gown</td><td>tuxedo</td></tr><tr><td>tuxedo</td><td>tuxedo</td><td>gown</td></tr></table>
|
| 213 |
+
|
| 214 |
+
Table 5: Comparing word rankings by humans, SEED dimensions, and FIT+S dimensions: Clothes by wealth. Italicized: 3 words with highest error in ranking.
|
| 215 |
+
|
| 216 |
+
# 5 Conclusion
|
| 217 |
+
|
| 218 |
+
In this paper we have proposed a method for constructing high-quality interpretable dimensions in embedding spaces. We show that by combining seed-based vectors with guidance from human ratings about properties, it is possible to induce better dimensions than with the seed-based methodology alone. We expect the proposed dimensions to be useful in various areas of study, including linguistics, psychology, and social science.
|
| 219 |
+
|
| 220 |
+
For the moment, the proposed dimensions address one property at a time. In future work, we are planning to explore multifaceted properties which would be better represented through multiple dimensions. Aside from a more elaborate description of these properties, a space of multiple interpretable dimensions will offer a rich context of comparison for words that might be similar in some respect and not in others (e.g., tiger and spider with respect to DANGER and SIZE).
|
| 221 |
+
|
| 222 |
+
# 6 Limitations
|
| 223 |
+
|
| 224 |
+
In our experiments we use English models and data. The seed-based methodology has been shown to work well in other languages, so an extension of the proposed methodology to other languages is possible. A limitation regarding this extension is the lack of human ratings, which are needed for calculating the fitted dimensions. A possible mitigation would be to translate the annotated English
|
| 225 |
+
|
| 226 |
+
data into other languages.
|
| 227 |
+
|
| 228 |
+
The ratings we used in our study were averages over individual human ratings, possibly obscuring legitimate differences between raters (Plank et al., 2014). Another limitation of the human ratings used in this study is that they were out of context, possibly obscuring effects of topic and polysemy.
|
| 229 |
+
|
| 230 |
+
There are many different ways to use contextualized embeddings. We have averaged over all token representations generated by BERT and RoBERTa for a word in a sentence pool, and used the top 4 layers of the models. It is possible that BERT and RoBERTa would do better, or at least differently, if other model layers (or layer combinations) were used.
|
| 231 |
+
|
| 232 |
+
Our approach is not at all compute-intensive. All computations were done on a laptop.
|
| 233 |
+
|
| 234 |
+
# Acknowledgements
|
| 235 |
+
|
| 236 |
+
We would like to thank the anonymous reviewers for their valuable feedback. This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the HIATUS Program contract #2022-22072200005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
|
| 237 |
+
|
| 238 |
+
# References
|
| 239 |
+
|
| 240 |
+
Emily Allaway and Kathleen McKeown. 2021. A unified feature representation for lexical connotations. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2145-2163, Online. Association for Computational Linguistics.
|
| 241 |
+
Jisun An, Haewoon Kwak, and Yong-Yeol Ahn. 2018. SemAxis: A lightweight framework to characterize domain-specific word semantics beyond sentiment. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2450-2461, Melbourne, Australia. Association for Computational Linguistics.
|
| 242 |
+
Maria Antoniak and David Mimno. 2021. Bad seeds: Evaluating lexical methods for bias measurement. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the
|
| 243 |
+
|
| 244 |
+
11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1889-1904, Online. Association for Computational Linguistics.
|
| 245 |
+
Marianna Apidianaki. 2022. From Word Types to Tokens and Back: A Survey of Approaches to Word Meaning Representation and Interpretation. Computational Linguistics, 49(2):465-523.
|
| 246 |
+
Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Journal of Language Resources and Evaluation, 43(3):209-226.
|
| 247 |
+
Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics, 48(1):207-219.
|
| 248 |
+
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Advances in Neural Information Processing Systems 29, pages 4349-4357, Barcelona, Spain.
|
| 249 |
+
Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020. Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4758-4781, Online. Association for Computational Linguistics.
|
| 250 |
+
Zied Bouraoui, José Camacho-Collados, and Steven Schockaert. 2020. Inducing Relational Knowledge from BERT. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, pages 7456-7463, New York, NY, USA. AAAI Press.
|
| 251 |
+
Zied Bouraoui, Víctor Gutiérrez-Basulto, and Steven Schockaert. 2022. Integrating ontologies and vector space embeddings using conceptual spaces. In International Research School in Artificial Intelligence in Bergen (AIB 2022), volume 99 of Open Access Series in Informatics (OASIs), pages 3:1-3:30, Dagstuhl, Germany. Schloss Dagstuhl – Leibniz-Zentrum für Informatik.
|
| 252 |
+
Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram Version 1. In LDC2006T13, Philadelphia, Pennsylvania. Linguistic Data Consortium.
|
| 253 |
+
Julian Brooke, Tong Wang, and Graeme Hirst. 2010. Automatic acquisition of lexical formality. In Coling 2010: Posters, pages 90-98, Beijing, China. Coling 2010 Organizing Committee.
|
| 254 |
+
Anne Cocos, Skyler Wharton, Ellie Pavlick, Marianna Apidianaki, and Chris Callison-Burch. 2018. Learning scalar adjective intensity from paraphrases. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1752-1762, Brussels, Belgium. Association for Computational Linguistics.
|
| 255 |
+
|
| 256 |
+
Alexis Conneau, German Kruszewski, Guillaume Lample, Loic Barrault, and Marco Baroni. 2018. What you can cram into a single \$&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Australia. Association for Computational Linguistics.
|
| 257 |
+
Joaquín Derrac and Steven Schockaert. 2015. Inducing semantic relations from conceptual spaces: A data-driven approach to plausible reasoning. Artificial Intelligence, 228:66-94.
|
| 258 |
+
Sunipa Dev and Jeff M Phillips. 2019. Attenuating Bias in Word Vectors. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), Naha, Okinawa, Japan.
|
| 259 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 260 |
+
Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635-E3644.
|
| 261 |
+
Aina Garí Soler and Marianna Apidianaki. 2020. BERT knows Punta Cana is not just beautiful, it's gorgeous: Ranking scalar adjectives with contextualised representations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7371-7385, Online. Association for Computational Linguistics.
|
| 262 |
+
Aina Garí Soler and Marianna Apidianaki. 2021a. Let's Play Mono-Poly: BERT Can Reveal Words' Polysemy Level and Partitionability into Senses. Transactions of the Association for Computational Linguistics, 9:825-844.
|
| 263 |
+
Aina Garí Soler and Marianna Apidianaki. 2021b. Scalar adjective identification and multilingual ranking. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4653-4660, Online. Association for Computational Linguistics.
|
| 264 |
+
Gabriel Grand, Idan Asher Blank, Francisco Pereira, and Evelina Fedorenko. 2022. Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nature Human Behaviour, 6:975-987.
|
| 265 |
+
Peter Gärdenfors. 2014. The Geometry of Meaning: Semantics Based on Conceptual Spaces. The MIT Press, Cambridge, MA.
|
| 266 |
+
|
| 267 |
+
John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics.
|
| 268 |
+
Ray S. Jackendoff. 1990. Semantic Structures. The MIT Press, Cambridge, MA.
|
| 269 |
+
Shoaib Jameel and Steven Schockaert. 2016. Entity embeddings with conceptual subspaces as a basis for plausible reasoning. In Proceedings of the Twenty-Second European Conference on Artificial Intelligence, ECAI'16, pages 1353-1361, NLD. IOS Press.
|
| 270 |
+
Jerrold J. Katz and Jerry A. Fodor. 1964. The structure of a semantic theory. In Jerry A. Fodor and Jerrold J. Katz, editors, The Structure of Language. Prentice-Hall, Englewood Cliffs, NJ.
|
| 271 |
+
Adam Kilgarriff. 2004. How Dominant Is the Commonest Sense of a Word? In Petr Sojka, Ivan Kopeček, and Karel Pala, editors, Text, Speech and Dialogue, Lecture Notes in Computer Science, vol. 3206, pages 103-112. Springer, Berlin, Heidelberg.
|
| 272 |
+
Austin C. Kozlowski, Matt Taddy, and James A. Evans. 2019. The geometry of culture: Analyzing the meanings of class through word embeddings. American Sociological Review, 84(5):905-949.
|
| 273 |
+
Haewoon Kwak, Jisun An, Elise Jing, and Yong-Yeol Ahn. 2021. FrameAxis: characterizing microframe bias and intensity with word embedding. PeerJ Computer Science, 7.
|
| 274 |
+
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint:1907.11692.
|
| 275 |
+
Li Lucy, Divya Tadimeti, and David Bamman. 2022. Discovering differences in the representation of people using contextualized semantic axes. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3477-3494, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
| 276 |
+
Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. 2023. Representation of lexical stylistic features in language models' embedding space. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM), Toronto, Canada.
|
| 277 |
+
Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. 2024. Towards Faithful Model Explanation in NLP: A Survey. arXiv preprint:2209.11326.
|
| 278 |
+
Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004. Finding predominant word senses in untagged text. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 279–286, Barcelona, Spain.
|
| 279 |
+
|
| 280 |
+
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.
|
| 281 |
+
Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751, Atlanta, Georgia. Association for Computational Linguistics.
|
| 282 |
+
Gregory L. Murphy. 2002. The Big Book of Concepts. MIT Press, Boston, Mass.
|
| 283 |
+
Ellie Pavlick and Ani Nenkova. 2015. Inducing lexical style properties for paraphrase and genre differentiation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 218-224, Denver, Colorado. Association for Computational Linguistics.
|
| 284 |
+
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
|
| 285 |
+
Barbara Plank, Dirk Hovy, and Anders Søgaard. 2014. Linguistically debatable or just plain wrong? In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 507-511, Baltimore, Maryland. Association for Computational Linguistics.
|
| 286 |
+
Abhilasha Ravichander, Yonatan Belinkov, and Eduard Hovy. 2021. Probing the probing paradigm: Does probing accuracy entail task relevance? In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3363-3377, Online. Association for Computational Linguistics.
|
| 287 |
+
Dustin S. Stoltz and Marshall A. Taylor. 2021. Cultural cartography with word embeddings. Poetics, 88:101567. Measure Mohr Culture.
|
| 288 |
+
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.
|
| 289 |
+
Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7222-7240, Online. Association for Computational Linguistics.
|
| 290 |
+
Anna Wierzbicka. 1996. Semantics: Primes and Universals. Oxford University Press, New York.
|
| 291 |
+
|
| 292 |
+
# A Appendix
|
| 293 |
+
|
| 294 |
+
Details on computing. All experiments were conducted on a MacBook Pro laptop using Python 3.8, with huggingface transformers version 4.35.2, torch version 2.0.1, sklearn version 1.2.1, numpy version 1.22.4 and scipy version 1.10.0.
|
| 295 |
+
|
| 296 |
+
Hyperparameter estimation. Development set: 6 conditions sampled at random from the object features dataset: cities-danger, states-political, animals-wetness, cities-intelligence, animals-weight, names-age.
|
| 297 |
+
|
| 298 |
+
As noted above, the only hyperparameters that made a difference were the use of a single averaged seed dimension, which was always beneficial, and the mixing parameter $\alpha$.
|
| 299 |
+
|
| 300 |
+
Best parameters:
|
| 301 |
+
|
| 302 |
+
$\alpha$ for FIT+SD: GloVe 0.02
|
| 303 |
+
$\alpha$ for FIT+S: GloVe 0.05
|
| 304 |
+
|
| 305 |
+
Embedding spaces and sentence selection. The GloVe embeddings used were trained on Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors), downloaded from https://nlp.stanford.edu/projects/glove/.
|
| 306 |
+
|
| 307 |
+
In order to generate embeddings for contextualized instances of words in our datasets using the BERT (bert-large-uncased) and RoBERTa (roberta-large) models (Devlin et al., 2019; Liu et al., 2019), we used sentences from the ukWaC corpus (Baroni et al., 2009). We collected ten sentences for each word, when available. We filtered out sentences with more than 100 tokens in order to avoid noisy contexts such as webpage menus crawled from the web. If a word had fewer than 10 occurrences in ukWaC, we used as many sentences as were available. This was the case for 10 words in the Grand et al. dataset (nairobi (6), seoul (4), taipei (5), lahore (2), baghdad (9), peyton (9), tehran (4), johannesburg (4), jaime (5), karachi (7)), and for one word (jazeera (6)) in the formality dataset. For hyphenated words in the Grand et al. dataset (e.g., new-york, south-carolina, south-dakota), we collected sentences where they occur without the hyphen.
|
2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:7f25210ac9ad22c122b2822e4f37e15f301eb7719e800f095ecde45d7e5fd6aa
|
| 3 |
+
size 328193
|
2024/Adjusting Interpretable Dimensions in Embedding Space with Human Judgments/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/929ac009-c5be-48ee-9603-18d937e312c6_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/929ac009-c5be-48ee-9603-18d937e312c6_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/929ac009-c5be-48ee-9603-18d937e312c6_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5147ab95a1d699a23fd280cdb5a97df52e7d1deb9518cd47b35667aa76711e49
|
| 3 |
+
size 789424
|
2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/full.md
ADDED
|
@@ -0,0 +1,706 @@
| 1 |
+
# Advancing Beyond Identification: Multi-bit Watermark for Large Language Models via Position Allocation
|
| 2 |
+
|
| 3 |
+
KiYoon Yoo<sup>1</sup> Wonhyuk Ahn<sup>2</sup> Nojun Kwak<sup>1*</sup>
|
| 4 |
+
|
| 5 |
+
$^{1}$ Seoul National University $^{2}$ Webtoon AI
|
| 6 |
+
|
| 7 |
+
{961230,nojunk}@snu.ac.kr whahnize@gmail.com
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
We show the viability of tackling misuses of large language models beyond the identification of machine-generated text. While existing zero-bit watermark methods focus on detection only, some malicious misuses demand tracing the adversary user in order to counteract them. To address this, we propose Multi-bit Watermark via Position Allocation, embedding traceable multi-bit information during language model generation. By allocating tokens onto different parts of the message, we embed longer messages in high-corruption settings without added latency. By independently embedding sub-units of the message, the proposed method outperforms existing works in terms of robustness and latency. Leveraging the benefits of zero-bit watermarking (Kirchenbauer et al., 2023a), our method enables robust extraction of the watermark without any model access, embedding and extraction of long messages ($\geq$ 32-bit) without finetuning, and maintained text quality, while allowing zero-bit detection all at the same time. Code is released here: https://github.com/bangawayoo/mb-lm-watermarking.
|
| 12 |
+
|
| 13 |
+
# 1 Introduction
|
| 14 |
+
|
| 15 |
+
How can we take a step further from merely identifying machine-generated text to proactively tackling misuses of large language models? The emergence of human-like language models and their easily accessible nature via web interface and APIs have garnered unprecedented attention from the public and academia (Hu, 2023). The ability to follow complex instructions has boosted the productivity of various tasks such as programming, creative writing, and more. However, there have been increasing concerns about exploiting such language models to automate malicious activities such as spreading disinformation. This has necessitated the
|
| 16 |
+
|
| 17 |
+
development of various methods to detect machine-generated texts through techniques such as zero-shot detection, supervised training, watermarking, and more (Mitchell et al., 2023; Wang et al., 2023b; Kirchenbauer et al., 2023a; Krishna et al., 2023). These endeavors focus on the crucial task of identifying machine-generated content, which serves as a pivotal step in mitigating the potential harm caused by such text.
|
| 18 |
+
|
| 19 |
+
However, when it comes to more pernicious misuses of large language models, such as the dissemination of misinformation and war propaganda on social media platforms for political or financial gains (Badawy et al., 2018; Pierri et al., 2023; Annie, 2023), the stakes are considerably higher, potentially leading to the erosion of social trust (Valenzuela et al., 2022). In such circumstances, merely identifying the machine-generated text may not suffice for the language model providers. Instead, the ability to trace back to the adversary user responsible for generating the content becomes pivotal in counteracting such misuses. By doing so, the API providers can take a precursory measure to ban these users from their systems and allow media and social platforms, along with API providers, to collaborate with law enforcement authorities and take more decisive actions. All in all, watermarking the user information (or part thereof) can hold the adversary user accountable for potential harms facilitated through language model APIs without having to store user queries (Krishna et al., 2023), which would be prohibitively expensive and concern ordinary users who value privacy.
|
| 20 |
+
|
| 21 |
+
All this can be achieved by embedding multi-bit information. Recent works (Fernandez et al., 2023b; Wang et al., 2023a) have achieved this by providing a distinct signal for each multi-bit message. While this is effective in low bit-width and low noise settings, maintaining the integrity of the watermark becomes increasingly difficult as the bit
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
Figure 1: Comparison of how messages are encoded for zero-bit watermarking (Kirchenbauer et al., 2023a), recent multi-bit methods, and our proposed method MPAC. For MPAC, the number inside a token (e.g. $p = 1$ ) denotes the allocated position.
|
| 25 |
+
|
| 26 |
+

|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
|
| 30 |
+
width increases due to the exponential number of possible messages. This is further aggravated in the presence of higher noise. In addition, having to consider all the possible messages comes with the side effect of increased latency during the encoding and/or decoding phase, the former of which degrades the end-user experience.
|
| 31 |
+
|
| 32 |
+
As opposed to this, our proposed method Multi-bit watermark via Position Allocation (MPAC) first allocates each token pseudo-randomly onto a subunit of the message to be embedded (Fig. 1). The allocation of tokens onto different parts of the message allows the embedding of longer messages without added generation latency and fares well in high-corruption settings. Then the message content at the allocated position determines the state to encode using a zero-bit watermarking scheme. For instance, when following the zero-bit watermarking scheme of Kirchenbauer et al. (2023a), the message content decides which token subsets are biased. To increase load capacity, we can further partition the vocabulary into multiple "colored" lists instead of a single green list, effectively encoding multiple states for every token. Our experiments show our method improves upon the runner-up baseline in terms of watermark robustness by $\geq 20\%$ in high-noise settings for 16-bit and 24-bit messages.
|
| 33 |
+
|
| 34 |
+
# 2 Related Works
|
| 35 |
+
|
| 36 |
+
Watermarking has been studied in various types of multimedia such as image (Potdar et al., 2005), video (Asikuzzaman and Pickering, 2017), audio (Hua et al., 2016), and natural language (Topkara et al., 2005). Following previous works (Zhu et al., 2018; Luo et al., 2020), we use the term watermarking to denote embedding information into natural language in a manner that is robust against possible attacks given a watermarked text – in our case, this is the output generated by a language model given the prompt. This differs from steganography (Cheddad et al., 2010; Fang et al.,
|
| 37 |
+
|
| 38 |
+
2017; Ziegler et al., 2019; de Witt et al., 2023), which focuses more on the undetectability of a secret message embedded in the multimedia than on robustness. For instance, Ziegler et al. (2019) sequentially encode information via arithmetic coding at every token. Naively applying this deterministic encoding scheme makes the watermark extremely fragile to simple corruptions, as shown in Appendix Fig. 7.
|
| 39 |
+
|
| 40 |
+
Recently, methods relying on neural networks have shown progress in natural language watermarking, outperforming traditional methods that rely on rule-based watermarks (Topkara et al., 2006b,a; Atallah et al., 2001). Abdelnabi and Fritz (2021) proposed an end-to-end framework where a decoder network predicts the encoded message. Yang et al. (2022) improved upon the quality of the watermarked text by using an algorithmic approach. Building upon this, Yoo et al. (2023) focused on robustness and capacity, outperforming previous works on both aspects. However, since the proposed method works at the sentence level, any addition or removal of a sentence will cause watermark extraction to fail. Moreover, these works cannot identify non-watermarked texts, making them unsuitable for distinguishing between machine and human text.
|
| 41 |
+
|
| 42 |
+
Meanwhile, directly watermarking language models in a zero-bit manner during token generation has emerged as a promising approach for distinguishing language model outputs from human text (Kirchenbauer et al., 2023a; Aaronson and Kirchner, 2023) while achieving robustness against realistic attacks (Kirchenbauer et al., 2023b). Several works have improved upon Kirchenbauer et al. (2023a), e.g., in low-entropy generation tasks such as code generation (Lee et al., 2023), undetectability of the watermark (Christ et al., 2023), and its robustness (Munyer and Zhong, 2023). We focus on extending the prior work toward a more proactive counteraction: identifying malicious users
|
| 43 |
+
|
| 44 |
+
of language models by embedding any information while maintaining the key advantages.
|
| 45 |
+
|
| 46 |
+
Concurrent to our work, Fernandez et al. (2023a) and Wang et al. (2023a) use the entire message to create a signal unique to each message. Crucially, both works use the entire message content directly during embedding as input to the random seed generator, which leads to key differences in terms of robustness and latency. We further discuss their methodology in comparison with ours in the next section. Aside from this, Wang et al. (2023a) further utilize a proxy language model to enhance text quality.
|
| 47 |
+
|
| 48 |
+
To give a rough estimate of the required message length for encoding a user ID, consider the POSIX (Group, 2018) standard used when creating usernames in operating systems. POSIX permits 65 characters ($\sim$ 7 bits), meaning at least 35 bits are required to encode a username of 5 characters. Accordingly, works in image watermarking embed messages easily over 32 bits (Zhu et al., 2018; Zhao et al., 2023; Fernandez et al., 2023b). Our method makes this feasible by encoding each bit position independently.
|
| 49 |
+
|
| 50 |
+
# 3 Method
|
| 51 |
+
|
| 52 |
+
We outline the multi-bit watermark protocol:
|
| 53 |
+
|
| 54 |
+
1. A user sends a prompt $X$ to the language model provider.
|
| 55 |
+
2. Using the message encoding function $\mathcal{E}$ , the language model provider generates watermarked text $Y$ embedded with multi-bit information. The message contains user-specific metadata that can aid tracing back to the user (e.g. timestamp, location, ID).
|
| 56 |
+
3. The user publishes the text $\tilde{Y}$ , which may be edited from the original watermarked text.
|
| 57 |
+
4. If the published text is deemed unsafe or malicious, the detector inspects $\tilde{Y}$ (i) to determine whether the watermark is present (zero-bit detection) and (ii) to decode the multi-bit message and take further measures.
|
| 58 |
+
|
| 59 |
+
# 3.1 Zero-bit Watermarking
|
| 60 |
+
|
| 61 |
+
Throughout the paper, we focus on applying our multi-bit framework to the zero-bit watermarking scheme introduced in Kirchenbauer et al. (2023a).* As a preliminary, we briefly review
|
| 62 |
+
|
| 63 |
+
the scheme. An auto-regressive language model $p(y|x)$ predicts the probability distribution over the next token $\Delta(\mathcal{V})$ given arbitrary length prefix tokens where $\mathcal{V}$ is the vocabulary. A zero-bit watermark is embedded by biasing the language model to output a certain subset of tokens. That is, the message encoding function $\mathcal{E}: \Delta(\mathcal{V}) \to \Delta(\mathcal{V})$ generates another probability distribution that alters the original distribution of $p(y|x)$ .
|
| 64 |
+
|
| 65 |
+
For Kirchenbauer et al. (2023a), the message encoding function pseudo-randomly chooses a subset of tokens at each token step $t$ to form a green list $\mathcal{G}_t$ . The logit scores $l_t \in \mathbb{R}^{|\mathcal{V}|}$ are modified toward selecting the green-listed tokens over the other tokens by adding a bias term $\delta$ to the logits in $\mathcal{G}_t$ . Instead of fixing the greenlist using rule-based heuristics such as spelling or synonym variations (He et al., 2022), the greenlist is selected pseudo-randomly at each time step to minimize a noticeable shift in the text distribution. At each time step, a seed $s$ is output from the previous $h$ tokens using a pseudo-random function $f: \mathbb{N}^h \to \mathbb{N}$ , and $s$ is used to sample $\mathcal{G}_t$ from $\mathcal{V}$ .
|
| 66 |
+
|
| 67 |
+
We dub this message encoding function Greenlist. Given $t - 1$ prefix tokens $X_{1:t-1}$ and a pseudo-random function $f$ , the $t^{\text{th}}$ token is generated by
|
| 68 |
+
|
| 69 |
+
# Greenlist
|
| 70 |
+
|
| 71 |
+
1. Compute hash of tokens $s = f(X_{t - h:t - 1})$ .
|
| 72 |
+
2. Permute vocabulary $\mathcal{V}_t$ using $s$ as seed for a random number generator (RNG).
|
| 73 |
+
3. Let $\mathcal{G}_t$ be the first $\gamma |\mathcal{V}|$ tokens from $\mathcal{V}_t$ .
|
| 74 |
+
4. Add $\delta$ to token logits in $\mathcal{G}_t$ .
|
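To make these four steps concrete, here is a minimal sketch of the Greenlist logit bias in Python. The toy hash, parameter names, and use of `torch` are our illustrative assumptions, not the authors' released implementation.

```python
import torch

def greenlist_bias(logits, prev_tokens, h=1, gamma=0.25, delta=2.0):
    """Bias next-token logits toward a pseudo-randomly chosen greenlist.

    logits: tensor of shape (|V|,); prev_tokens: list of token ids.
    """
    vocab_size = logits.shape[0]
    # Step 1: seed from the previous h tokens (toy hash; a real system
    # would use a keyed pseudo-random function).
    seed = hash(tuple(prev_tokens[-h:])) % (2**31)
    rng = torch.Generator().manual_seed(seed)
    # Step 2: permute the vocabulary with the seeded RNG.
    perm = torch.randperm(vocab_size, generator=rng)
    # Step 3: the first gamma * |V| permuted tokens form the greenlist.
    green = perm[: int(gamma * vocab_size)]
    # Step 4: add the bias delta to the greenlisted logits.
    biased = logits.clone()
    biased[green] += delta
    return biased
```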
| 75 |
+
|
| 76 |
+
Decoding. To determine the presence of the watermark, the detector inspects the ratio of greenlisted tokens using the same pseudo-random function $f$ . A watermarked text will ideally have a high ratio of green tokens. Without knowledge of the greenlist (null hypothesis), the number of tokens in the greenlist ( $g$ ) follows a binomial distribution. Kirchenbauer et al. (2023a) used the normal approximation to the binomial distribution to compute the $z$ -statistic for a text with $T$ tokens:
|
| 77 |
+
|
| 78 |
+
$$
|
| 79 |
+
z = \frac{g - \gamma T}{\sqrt{\gamma (1 - \gamma) T}}.
|
| 80 |
+
$$
|
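For a quick sanity check of this statistic (a sketch, not the authors' code): 60 green tokens out of 100 with $\gamma = 0.25$ already gives $z \approx 8.1$, far beyond any plausible detection threshold.

```python
import math

def z_score(g, T, gamma=0.25):
    """z-statistic for g greenlisted tokens out of T under the null."""
    return (g - gamma * T) / math.sqrt(gamma * (1 - gamma) * T)

print(z_score(60, 100))  # -> ~8.08
```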
| 81 |
+
|
| 82 |
+
# 3.2 MPAC: Extending to Multi-bit Watermark
|
| 83 |
+
|
| 84 |
+
The objective of multi-bit watermarking is to embed and extract a message $\mathbf{m} \in \Sigma^b$ , where $\Sigma$ denotes the alphabet of possible $r$ -ary symbols. For a binary message, $\Sigma = \{0,1\}$ . We let $p \in \{0,\dots,b - 1\}$ denote a position in the message and $\mathbf{m}[p] \in \{0,\dots,r - 1\}$ the message content at position $p$ . Hereafter, we use $[a]$ to denote the integer set $\{0,\dots,a - 1\}$ .
|
| 87 |
+
|
| 88 |
+
Our proposed method Multi-bit watermarking via Position Allocation (MPAC) works by allocating the tokens to message positions. First, notice that zero-bit watermarking can be viewed as watermarking a single bit of information stating the existence of a watermark. In essence, each token generated by the language model is a signal that reinforces the watermark.
|
| 89 |
+
|
| 90 |
+
Our message encoding function $\mathcal{E}: \Sigma^b \times \Delta(\mathcal{V}) \to \Delta(\mathcal{V})$ alters the probability distribution dependent on the message. We first assign a position $p$ using a random number generator seeded with $s$ . Then the message content $m = \mathbf{m}[p]\in [r]$ is encoded by permuting $\mathcal{V}$ and favoring the $m^{\mathrm{th}}$ subset. Our message encoding function is extremely easy to implement over the Greenlist scheme. We highlight the steps in colors that are specific to ours. An illustrative flow diagram is shown in Fig. 2.
|
| 91 |
+
|
| 92 |
+
# MPAC
|
| 93 |
+
|
| 94 |
+
1. Compute $s = f(X_{t - h:t - 1})$ .
|
| 95 |
+
2. $p\gets$ sample([b]) using $s$ as seed.
|
| 96 |
+
3. $m\gets \mathbf{m}[p]$
|
| 97 |
+
4. Permute vocabulary $\mathcal{V}_t$ using $s$ as seed.
|
| 98 |
+
5. Partition $\mathcal{V}_t = [\mathcal{C}_t^0,\dots ,\mathcal{C}_t^{r - 1}]$ discarding remainders if any.
|
| 99 |
+
6. Add $\delta$ to token logits in $\mathcal{C}_t^m$ .
|
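Below is a minimal sketch of the six MPAC steps, extending the Greenlist sketch in §3.1; the toy hash and seeding details are again our assumptions rather than the released code.

```python
import torch

def mpac_bias(logits, prev_tokens, message, h=1, gamma=0.25, delta=2.0):
    """MPAC logit bias: allocate this token to a message position, then
    favor the colorlist indexed by the message content at that position.

    message: digits of the message in radix r = floor(1 / gamma).
    """
    vocab_size = logits.shape[0]
    r = int(1 / gamma)
    # Step 1: seed s from the previous h tokens (toy hash).
    seed = hash(tuple(prev_tokens[-h:])) % (2**31)
    rng = torch.Generator().manual_seed(seed)
    # Step 2: sample a message position using s as seed.
    p = int(torch.randint(len(message), (1,), generator=rng))
    # Step 3: message content at position p picks the colorlist.
    m = message[p]
    # Step 4: permute the vocabulary, reusing the same seed s.
    rng.manual_seed(seed)
    perm = torch.randperm(vocab_size, generator=rng)
    # Step 5: partition into r colorlists, discarding any remainder.
    size = vocab_size // r
    colorlist = perm[m * size : (m + 1) * size]
    # Step 6: bias the selected colorlist.
    biased = logits.clone()
    biased[colorlist] += delta
    return biased
```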
| 100 |
+
|
| 101 |
+
Colorlisting In Step 5, $r$ is the number of available partitions. The number of vocabulary partitions is determined by the greenlist proportion $\gamma$ , i.e. $r = \left\lfloor \frac{1}{\gamma} \right\rfloor$ . When $r > 2$ , we can further increase the load capacity by taking advantage of all the 'colored' lists (hence, the notation $\mathcal{C}$ ), instead of only using the greenlist. Given a binary message of length $b$ , the message is converted to radix $r$ attaining $\mathbf{m}_r \in [r]^{\tilde{b}}$ where $\tilde{b} = \lceil \frac{b}{\log_2 r} \rceil$ . In Fig. 2, we illustrate the case of $r = 4$ and $b = 8$ , where the 8-bit message is converted into radix 4, resulting in an effective message length of $\tilde{b} = 4$ . When $\tilde{b} \neq b$ , we sample the position from $\tilde{b}$ .
|
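For concreteness, here is a small helper (hypothetical, not from the released code) that performs this binary-to-radix-$r$ conversion:

```python
import math

def to_radix(bits: str, r: int) -> list:
    """Convert a b-bit binary string into ceil(b / log2(r)) radix-r digits."""
    b_eff = math.ceil(len(bits) / math.log2(r))
    value = int(bits, 2)
    digits = []
    for _ in range(b_eff):
        digits.append(value % r)
        value //= r
    return digits[::-1]  # most significant digit first

# An 8-bit message in radix 4 has effective length 4, as in Fig. 2.
print(to_radix("10110010", 4))  # -> [2, 3, 0, 2]
```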
| 102 |
+
|
| 103 |
+
At each token generation, the message content at the assigned position $p$ determines which colorlist to add $\delta$ to. If the message content is '0', the tokens
|
| 104 |
+
|
| 105 |
+

|
| 106 |
+
|
| 107 |
+

|
| 108 |
+
|
| 109 |
+
Figure 2: An overview of our method MPAC, illustrating the seven steps at each token generation: (1) compute a seed from the previous context, (2) sample a position $p$, (3) get the message content $m = \mathbf{m}[p]$, (4) permute the vocabulary using the seed, (5) partition the vocabulary into colorlists, (6) add bias to the selected colorlist, and (7) generate the token and repeat. See §3.2 for details.
|
| 137 |
+
|
| 138 |
+
from the first list (red in Fig. 2) are favored. Note that zero-bit watermarking can be seen as a special case of embedding the same single bit message $(b = 1, \mathbf{m} = 0)$ .
|
| 139 |
+
|
| 140 |
+
Message Decoding Given a watermarked language model output, we determine the position and which colorlist each token is from and increment the number of tokens in the colored lists. For instance, for the $t^{\mathrm{th}}$ token with message position $p = i$ and the $j^{\mathrm{th}}$ colorlist $\mathcal{C}_t^j$ , we increment the counter $\mathbf{W}[i][j]$ where $\mathbf{W} \in \mathbb{R}^{\tilde{b} \times r}$ . After computing this on the entire text segment, we predict the message content by taking the colorlist with the most tokens for each position. A Pythonic algorithm is shown in Algorithm 1.
|
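Below is a minimal Python sketch of this counting-and-argmax decoder, consistent with the encoding sketch in §3.2 (the toy hash and helper names are our assumptions).

```python
import torch

def mpac_decode(tokens, b_eff, r, vocab_size, h=1):
    """Recover the radix-r message by a per-position majority vote.

    Returns the predicted digits and w, the total count in winning lists.
    """
    W = torch.zeros(b_eff, r, dtype=torch.long)
    size = vocab_size // r
    for t in range(h, len(tokens)):
        seed = hash(tuple(tokens[t - h:t])) % (2**31)
        rng = torch.Generator().manual_seed(seed)
        p = int(torch.randint(b_eff, (1,), generator=rng))
        rng.manual_seed(seed)  # same seed for the permutation
        perm = torch.randperm(vocab_size, generator=rng)
        # Find which colorlist token t landed in after the permutation.
        idx = int((perm == tokens[t]).nonzero())
        m = idx // size
        if m < r:  # tokens in the discarded remainder are ignored
            W[p, m] += 1
    digits = W.argmax(dim=1).tolist()
    w = int(W.max(dim=1).values.sum())
    return digits, w
```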
| 141 |
+
|
| 142 |
+
# 3.3 Detecting Machine Text
|
| 143 |
+
|
| 144 |
+
While we can use MPAC to decode the multi-bit watermark in conjunction with another detection mechanism, MPAC alone can distinguish watermarked text from human-written text just like zero-bit watermarking. To distinguish between a watermarked text and a non-watermarked (human-written) text, we count the number of tokens assigned to the predicted message. This corresponds to $w$ returned by Algorithm 1. We model the number of tokens in the argmax colorlist of position $i$ as a random variable $C_i \stackrel{H_0}{\sim} \text{Binomial}(T_i, \gamma)$ where $T_i$ is the number of tokens assigned to position $i$ . As $C_0, \ldots, C_{b-1}$ are independent for a fixed set of trials $(T_0, \ldots, T_{b-1})$
|
| 145 |
+
|
| 146 |
+
Algorithm 1: Message Decoding
Input: Text $X_{1:T}$ , context width $h$ , effective message length $\tilde{b}$ , counter $\mathbf{W} \in \mathbb{R}^{\tilde{b} \times r}$
Output: Predicted message $\hat{\mathbf{m}}$ , number of colorlisted tokens $w$
/* Initialize counter */
1 $\mathbf{W}[p][m] = 0 \;\forall p, m$
/* Count tokens in colorlists */
2 for $t$ in $[h + 1, T]$ do
3 $s = f(X_{t-h:t-1})$
4 $p = \text{sample}([\tilde{b}])$ using $s$ as seed
5 $\mathcal{V}_t = \text{permute}(\mathcal{V}_t)$ using $s$ as seed
6 for $m$ in $[r]$ do
7 if $X_t \in \mathcal{G}_t^m$ then
8 $\mathbf{W}[p][m]$ += 1
9 continue
10 end
11 end
12 end
/* Predict message */
13 $\hat{\mathbf{m}}_r = \text{""}$
14 $w = 0$
15 for $p$ in $[\tilde{b}]$ do
16 $w$ += $\max_m \mathbf{W}[p][m]$
17 $\hat{m} = \operatorname{argmax}_m \mathbf{W}[p][m]$
18 $\hat{\mathbf{m}}_r$ += str( $\hat{m}$ )
19 end
20 Get bit message $\hat{\mathbf{m}}$ by converting $\hat{\mathbf{m}}_r$
21 return $\hat{\mathbf{m}}$ , $w$
|
| 164 |
+
|
| 165 |
+
and have the same success probability parameter, the sum of these is a binomial random variable as well:
|
| 166 |
+
|
| 167 |
+
$$
|
| 168 |
+
C = C_0 + \dots + C_{b-1} \stackrel{H_0}{\sim} \mathrm{Binomial}(T, \gamma) \qquad (1)
|
| 169 |
+
$$
|
| 170 |
+
|
| 171 |
+
where $T = T_{0} + \dots + T_{b - 1}$ . This reduces to the same random variable used in zero-bit watermarking and we can compute the z-statistics. More discussions regarding the details of the z-statistic and other possible statistics are outlined in Appendix A.2.
|
| 172 |
+
|
| 173 |
+
# 3.4 Comparison to Other Works
|
| 174 |
+
|
| 175 |
+
The message encoding functions of existing works use the entire message $\mathbf{m}$ . After permuting $\mathcal{V}_t$ , Fernandez et al. (2023a) cyclically shift the vocabulary $m_{10}$ times, where $m_{10}$ is the radix-10 form of $\mathbf{m}$ . This modifies Step 2 of Greenlist. Wang et al. (2023a) hash $\mathbf{m}$ to attain a seed $s'$ used to permute the vocabulary along with the seed attained from prefix tokens, modifying Step 1.
|
| 176 |
+
|
| 177 |
+
2'. Permute $\mathcal{V}_t$ using $s$ as seed. Then, cyclic shift $m_{10}$ times.
|
| 178 |
+
|
| 179 |
+
1'. $s^{\prime}\gets \mathrm{Hash}(s + \mathrm{Hash}(m_{10}))$
|
| 180 |
+
|
| 181 |
+
Using the entire message leads to two key characteristics that diverge from ours. First, the Hamming distance between two messages is not necessarily preserved after applying the encoding function. As an example, consider Message-Hash. Using the final seed $s'$ created from $\mathbf{m} = 0000$ does not guarantee an output from the RNG that is any closer to that of $\mathbf{m} = 0001$ (Hamming distance of 1) than it is to $\mathbf{m} = 1111$ (Hamming distance of 4). This leads to an all-or-nothing behavior where either the entire message is extracted without error or a completely random message is extracted. In the presence of high corruption, which reflects the real-world case, we show this behavior is undesirable as it lacks enough signal to correctly predict the message.
|
| 184 |
+
|
| 185 |
+
In addition, the exponential number of messages $(\mathcal{O}(2^b))$ must be considered during message decoding to find the optimal message, which renders decoding of long messages $(\geq 32$ -bit) computation-heavy†. For Fernandez et al. (2023a), the bit-width affects the encoding phase due to the cyclic shift operation, which is more problematic as it affects the end users. MPAC encodes and decodes each bit position of the message independently, which brings a negligible increase in computation as the message length increases.
|
| 186 |
+
|
| 187 |
+
The simplicity of our multi-bit watermark scheme via position allocation makes it easy to apply it on top of other methods. For example, using the position allocation scheme, we can decompose the multi-bit message into blocks and hierarchically embed them using the message encoding scheme of Fernandez et al. (2023a). Details are in Appendix A.3. In addition, the message encoding function of MPAC can be generalized to
|
| 188 |
+
|
| 189 |
+
other zero-bit watermark approaches that use exponential minimum sampling (Aaronson and Kirchner, 2023; Kuditipudi et al., 2023). The scheme is provided in Appendix A.4.
|
| 190 |
+
|
| 191 |
+
# 3.5 Techniques for Practical Use
|
| 192 |
+
|
| 193 |
+
List decoding is a well-established field in coding theory that decodes a list of messages within a certain Hamming distance (Elias, 1991; Guruswami and Rudra, 2008; Guruswami, 2004). Inspired by this, we alter our decoding function to output candidate messages sorted by level of confidence. In practice, list decoding is especially useful because provenance tracing via watermarking is less about finding an exact solution than about narrowing down the possible leakage points for a more detailed inspection, which may be costly. For instance, when watermarking the timestamp of activity, it is useful to have a likely set of timestamps for practitioners to manually inspect, rather than a single candidate.
|
| 194 |
+
|
| 195 |
+
Denoting the predicted message for position $i$ by $\hat{m}$ , and the observed number of tokens in the colored list (strength of the watermark) by $w = \mathbf{W}_i[\hat{m}]$ , the confidence of $\hat{m}$ should be higher if $w$ deviates from the expected mean under the null hypothesis that all colored lists are equally likely to be sampled. We define confidence at position $i$ as $c_i \propto \operatorname*{Pr}(W_i^{\max} \leq w|H_0)$ where $W_i^{\max}$ is the maximum cell value of $W_i \stackrel{H_0}{\sim}$ Multinomial $(T_i, [\gamma \cdots \gamma])$ where $T_i$ is the number of tokens assigned to position $i$ . The distribution of $W_i^{\max}$ is approximated using techniques from Levin (1981) (See Appendix A.2.2).
|
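Since the distribution of $W_i^{\max}$ has no simple closed form, the same quantity can also be estimated by straightforward Monte Carlo simulation; the sketch below stands in for Levin's (1981) approximation used by the authors, and the function name and sample count are ours.

```python
import numpy as np

def confidence(w, T_i, r, n_sim=10_000, seed=0):
    """Estimate Pr(max multinomial cell <= w | H0) for T_i tokens spread
    over r equally likely colorlists."""
    rng = np.random.default_rng(seed)
    counts = rng.multinomial(T_i, [1.0 / r] * r, size=n_sim)
    return float((counts.max(axis=1) <= w).mean())

# e.g., 20 of 40 tokens in one colorlist with r=4 is a high-confidence
# position: nearly all null draws have a smaller maximum cell.
print(confidence(20, 40, 4))
```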
| 196 |
+
|
| 197 |
+
Our algorithm can be parameterized by the confidence bound on each position:
|
| 198 |
+
|
| 199 |
+
- Input: Best prediction $\hat{\mathbf{m}}$ found by majority voting via Alg. 1, confidence bound $c_{0}$
|
| 200 |
+
- Output: $\hat{\mathbf{m}}_1,\dots ,\hat{\mathbf{m}}_{|\mathbb{L}|}\in \mathbb{L}$ , whose predictions are altered on positions with confidence under $c_{0}$
|
| 201 |
+
|
| 202 |
+
Empirically, we determine $c_{0}$ by constraining $|\mathbb{L}|$ . Note that since $\hat{\mathbf{m}}$ is always the most confident message, we compose $\mathbb{L}$ of the next most confident messages. To do this, we greedily alter the positions with the lowest confidence to the colorlist with the second largest number of tokens. Note that this list decoding technique is not unique to ours and can be applied to other methods as long as the decoding stage is computationally feasible.
|
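A sketch of this greedy list construction follows; for simplicity it uses the margin between the top two counts as the confidence proxy rather than the multinomial tail probability defined above.

```python
import torch

def list_decode(W, list_size):
    """Greedily build candidate messages from the (b_eff, r) count matrix W.

    Each extra candidate flips one low-confidence position of the best
    message to the colorlist with the second-largest count.
    """
    best = W.argmax(dim=1)
    top2 = W.topk(2, dim=1)
    margin = top2.values[:, 0] - top2.values[:, 1]  # confidence proxy
    order = margin.argsort()  # least confident positions first
    candidates = [best.tolist()]
    for i in order[: list_size - 1]:
        alt = best.clone()
        alt[i] = top2.indices[i, 1]
        candidates.append(alt.tolist())
    return candidates
```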
| 203 |
+
|
| 204 |
+
Moreover, applying other techniques from coding theory, such as error correction codes (Wicker and Bhargava, 1999), to MPAC is straightforward as we independently encode each position of the message. More examples are in Appendix A.3.
|
| 205 |
+
|
| 206 |
+
# 4 Experiments
|
| 207 |
+
|
| 208 |
+
# 4.1 Experimental Settings
|
| 209 |
+
|
| 210 |
+
For our main experiments, we use LLaMA-2-7B (Touvron et al., 2023) to generate sequences using the newslike subset of the Colossal Clean Crawled Corpus (C4) dataset (Raffel et al., 2020) following previous work (Kirchenbauer et al., 2023a). This simulates the scenario of generating fake news given a certain topic. For watermarking and text generation, we follow the configurations used in Kirchenbauer et al. (2023b) unless otherwise denoted: bias $\delta = 2.0$ , greenlist ratio $\gamma = 0.25$ , which have shown a good trade-off between the detection performance and generation quality. Since $\gamma = 0.25$ , the number of colors $r$ is 4. We embed a random $b$ -bit message into $>500$ samples and report the mean metrics across samples.
|
| 211 |
+
|
| 212 |
+
When using the term 'bit' or 'bit-width', this denotes the initial message length; the effective message length is determined by $r$ . When necessary, we also show the three standard error ranges. More details are in Appendix A.5.
|
| 213 |
+
|
| 214 |
+
Metrics To measure the performance of multi-bit watermarking, we use bit accuracy following previous works in the watermarking literature (Zhu et al., 2018; Luo et al., 2020; Yang et al., 2022; Yoo et al., 2023) to measure how much of the embedded bits can be extracted without error. For zero-bit watermark performance (i.e. machine-text detection), we use the area under the ROC curve (AUROC) and the true positive rate (TPR) at various false positive rate thresholds. Since all the baselines and ours are implemented on top of Greenlist, the impact on the text distribution is equivalent for all the methods when $\delta$ is equal. While Message-Hash proposes using an auxiliary language model to aid the vocabulary partitioning, this is not feasible when the tokenizers of the main language model and the auxiliary one differ. Thus, we use the 'Vanilla Marking' proposed in their paper. We further discuss the validity of the metrics in Appendix A.6.
|
| 215 |
+
|
| 216 |
+
To compute the performance of list decoding, we take the closest message out of the candidates.
|
| 217 |
+
|
| 218 |
+
Threat Model In the real world, a user may edit the generated text before publishing, either to refine the language model output or in an attempt to evade the watermark. We study two types of attacks from past work (Kirchenbauer et al., 2023b): copy-paste, which mixes the watermarked text with human text, and paraphrasing, which uses another language model to paraphrase the watermarked text. For the copy-paste attack, we randomly interleave the generated watermarked text into a non-watermarked text, mixing in a percentage $p$ of non-watermarked text while maintaining the total length. For paraphrasing, we use GPT-3.5-turbo (the prompt is shown in Table 16). Neither attack maintains the start and end tokens of the watermarked text.
|
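To make the copy-paste attack concrete, here is a simplified sketch; the interleaving granularity (fixed-size spans) and the assumption that the human text is at least as long as the watermarked text are ours.

```python
import random

def copy_paste_attack(wm_tokens, human_tokens, p, span=25, seed=0):
    """Replace roughly a fraction p of watermarked positions with
    human-written spans, keeping the total length fixed."""
    rng = random.Random(seed)
    out = list(wm_tokens)
    n_replace = int(p * len(out))
    replaced = 0
    while replaced < n_replace:
        start = rng.randrange(0, max(1, len(out) - span))
        length = min(span, n_replace - replaced)
        out[start:start + length] = human_tokens[start:start + length]
        replaced += length  # overlapping draws make this approximate
    return out
```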
| 219 |
+
|
| 220 |
+
# 4.2 Results
|
| 221 |
+
|
| 222 |
+
For numerical results, see the tables in Appendix A.10.
|
| 223 |
+
|
| 224 |
+
Comparison with Other Works. We compare MPAC with Fernandez et al. (2023a, Cyclic-Shift) and Wang et al. (2023a, Message-Hash). We do not compare with other steganography and post-processing works as they are extremely fragile in real-world corruption settings. Please refer to Sec. 2 for details. For Cyclic-Shift, the bit-width is bounded by $\log_2|\mathcal{V}|\approx 15$ bits, since the cyclic-shift operation is only unique up to the size of the vocabulary. Due to this, we also experiment with extending Cyclic-Shift to another zero-bit watermarking scheme, exponential minimum sampling (Aaronson and Kirchner, 2023, EMS), which does not have a theoretical upper bound. We call this Cyclic-Shift (EMS).
|
| 225 |
+
|
| 226 |
+
The results in Fig. 3 show the clean and robust multi-bit accuracy in the presence of the copy-paste attack. At 8-bit, all methods achieve nearly $100\%$ accuracy and do fairly well even in the presence of corruption. At higher bit-widths, MPAC outperforms the others in both clean and robust accuracy. As the corruption rate increases, the other methods degrade dramatically. In contrast, MPAC can withstand corruption due to position allocation, which independently encodes each position. In Fig. 3 Right, we compare the watermark detection performance at 8-bit. For Cyclic-Shift and Message-Hash, we use 10,000 negative samples, and the TPRs@FPR=1e-5 are linearly interpolated due to the lengthened decoding time. The
|
| 227 |
+
|
| 228 |
+
results demonstrate that MPAC outperforms them at low FPR thresholds. Notably, at $\mathrm{FPR} = 1e - 5$ , our true positive rate is .951.
|
| 229 |
+
|
| 230 |
+
Enlarging the message length comes at the cost of computation for prior works. Increasing the bit-width from 16-bit to 24-bit lengthens the generation time of Cyclic-Shift by roughly $3.6\mathrm{x}$ (14 seconds $\rightarrow$ 50 seconds) per sample, while MPAC incurs no added latency (Fig. 5b). Message-Hash does not suffer from latency overhead during encoding, but its computation and memory overhead increase exponentially during decoding.
|
| 231 |
+
|
| 232 |
+
MPAC can maintain the watermark under various corruptions. The full results of the copy-paste attack are in Appendix Fig. 12. Even at 32-bit, our watermark is not entirely destroyed, as we encode each position of the watermark independently, which shows that it can benefit from error correction codes. We found paraphrasing to be much more challenging than the copy-paste attack when paraphrasing a short segment (T=250), and thus experimented with only 8-bit messages and increasing token lengths (Fig. 4). Nonetheless, the robustness trends from the copy-paste attack remain the same, with MPAC performing better than the baselines (Tab. 7). With T=500, the bit accuracy reaches nearly $80\%$ , and with 16-list decoding, we attain $90\%$ bit accuracy across all token lengths. More attacks using a different paraphrase model are considered in Appendix A.8.
|
| 233 |
+
|
| 234 |
+
Colorlisting improves multibit performance. Next, we verify the effectiveness of 'colorlisting', which takes advantage of the surplus vocabulary partitions. Fig. 5a demonstrates the gain in load capacity from using $r = 4$ colorlists as opposed to $r = 2$ given a fixed $\gamma$ . Besides the 8-bit case, which already achieves high accuracy, the performance of $\gamma = 0.25$ , $r = 4$ is statistically significantly better ( $p = 1e - 2$ ) than the second-best variant. We further discuss the implications of varying $\gamma$ , $r$ in Section 5.
|
| 235 |
+
|
| 236 |
+
Next, we increase the number of tokens (T) and the bit-width accordingly to demonstrate the feasibility of embedding longer messages. While the performance degrades as we increase the bit-width, the watermark does not entirely break, demonstrating the benefits of decomposing the message by positions. Moreover, the degradation can be partially compensated for by using list decoding. For 32-bit, the best message in the list achieves $95\%$ bit acc. by verifying only 16 out of $2^{32}$ possible messages. For 64-bit, the absolute performance
|
| 237 |
+
|
| 238 |
+

|
| 239 |
+
Figure 3: Left: Comparison with prior works without corruption (clean) and in the presence of copy-paste attack with $p\%$ . On 24-bit, only 100 samples were watermarked for Cyclic-Shift and Message-Hash due to lengthened encoding / decoding time. Right: TPR for various FPR thresholds.
|
| 240 |
+
|
| 241 |
+

|
| 242 |
+
|
| 243 |
+

|
| 244 |
+
Figure 4: Corrupted bit accuracy for paraphrasing attack using GPT-3.5 embedding 8-bit messages at varying token lengths. We show multiple sizes of list $(|L|\in \{2, 4, 8, 16\})$ by color gradation as 8-bit has relatively small output space.
|
| 245 |
+
|
| 246 |
+
gain is $3.0\%$ by generating merely 16 more candidate messages, which corresponds to roughly $1\mathrm{e}^{-20}$ of the total possible messages. Excluding the 8-bit case, whose $\mathrm{AUC} = .988$ , all the others have $\mathrm{AUC} > .99$ .
|
| 247 |
+
|
| 248 |
+
Detection performance is affected by bit-width. To get a clearer picture of the detection performance, we compute AUC vs. the number of tokens observed in Fig. 6a, following Kirchenbauer et al. (2023b). We see that the detection performance decreases as the bit-width is increased. This phenomenon is similarly observed in other works, as the increase in the number of "hypotheses" to check leads to an increase in the false positive rate (Fernandez et al., 2023b). We further discuss the reasons behind this in the subsequent section. Note, however, that a watermarked text with a 32-bit message reaches AUC over 0.99 once 200 tokens ( $\approx$ 150 words) are observed. The TPR at FPR=1e-3 for b={0,8,16,24,32} is 0.98, 0.98, 0.95, 0.93, and 0.91, respectively (shown in Table 8).
|
| 249 |
+
|
| 250 |
+
Text quality is not affected by bit-width. MPAC extends zero-bit watermarking by allocating tokens to message positions and partitioning vocabularies, which would otherwise be allocated to a single position and a single vocabulary partition. Consequently, given the same $\delta$ and $\gamma$ , it only alters the text distribution to the extent that zero-bit watermarking does, regardless of the bit-width. Indeed, our empirical results in Fig. 5b demonstrate that the text quality is statistically indistinguishable across bit-widths. We also show that the encoding latency, which directly affects user experience, does not increase with bit-width. Three standard error ranges are shown.
|
| 253 |
+
|
| 254 |
+
Across Model Scales, Datasets, Hash Schemes. We further experiment with other pretrained models (Jiang et al., 2023; Zhang et al., 2022) and their finetuned versions in Table 1. The results demonstrate that Mistral and OPT achieve similar performance, showing that our method is not limited to a specific pretrained model. We also find that the finetuned versions are capable of watermarking, though the finetuned LLaMA model shows a slight drop-off. The results for larger models (13B, 70B) and other datasets are in Appendix A.9. To summarize, we found that text distributions with low entropy inherently have lower load capacity, as observed in prior works. However, our results consistently show that multi-bit watermarking is possible for open-form generation – which resembles disinformation generation – across model types and scales. We also present results for another hash scheme with a longer context width in Appendix Tables 13 and 14, which show a similar level of performance.
|
| 255 |
+
|
| 256 |
+
# 5 Discussions
|
| 257 |
+
|
| 258 |
+
Load capacity and detection performance tradeoff. As noted above, embedding longer messages degrades the watermark detection performance due
|
| 259 |
+
|
| 260 |
+

|
| 261 |
+
Figure 5: (a) Clean bit accuracy with 3 standard errors for a fixed number of tokens (left) and fixed BPT (right). (b) Text quality (PPL, P-SP) and encoding latency across bit widths. 3 standard errors are shown.

Figure 6: (a) AUC@number of tokens observed for $b = \{0,8,16,24,32\}$ . Darker colors denote larger bit-widths. (b) Zero-bit and multi-bit watermark performance for varying $\gamma$ and $r$ for 1000 samples at $T = 100, b = 8$ . (c) Error rate as a function of confidence.
|
| 276 |
+
|
| 277 |
+
<table><tr><td>Model</td><td>Bit Acc.</td></tr><tr><td>LLaMA-2-7b</td><td>.986 (.06)</td></tr><tr><td>+ Chat</td><td>.922 (.13)</td></tr><tr><td>Mistral-7b</td><td>.987 (.06)</td></tr><tr><td>+ Chat</td><td>.977 (.08)</td></tr><tr><td>OPT-1.3b</td><td>.982 (.07)</td></tr></table>
|
| 278 |
+
|
| 279 |
+
Table 1: Performance on other pretrained models and their SFT and RLHF variants (Llama-2-7b-chat-hf and Mistral-7B-Instruct-v0.1). Results on $b = 8$ , $T = 250$ .
|
| 280 |
+
|
| 281 |
+
to overestimating the statistics of non-watermarked human texts (Fig. 8). This is because computing the statistic involves finding the maximum cell value for each position. One natural solution is to use a better statistic that models the maximum cell value of a multinomial distribution. Empirically, we found that this performed on par with or even slightly worse than the current approach, which may be due to the approximation error at small sample sizes. We give a more detailed discussion in Appendix A.2.
|
| 282 |
+
|
| 283 |
+
Radix and Colorlist proportion How do the radix and the colorlist proportion $\gamma$ influence multi-bit watermark performance? For $\gamma = .125$ , the benefits of enlarging $r$ to 8 are saturated and show no statistically significant difference from $r = 4$ . While a larger $r$ allows more
|
| 284 |
+
|
| 285 |
+
tokens to be assigned to each position by reducing the effective length of the message, it makes the problem harder by increasing the number of possible answers (digits) per position. Additionally, we observed that increasing the radix trades off zero-bit performance for multi-bit performance. These observations are illustrated in Fig. 6b.
|
| 286 |
+
|
| 287 |
+
List Decoding Ablation In Fig. 6c, we show the bit error rate stratified by confidence. While not perfectly calibrated (it under-estimates), higher confidence leads to a lower error rate. We also highlight the effectiveness of this technique by comparing it with randomly outputting candidate messages from scratch in Table 3. We also observed that randomly altering a single position provides a good list, as the best candidate message is already a good starting point.
|
| 288 |
+
|
| 289 |
+
# 6 Conclusion
|
| 290 |
+
|
| 291 |
+
Our findings demonstrate the effectiveness of embedding multi-bit messages by allocating tokens to sub-units of the message. We show that this leads to neither quality degradation compared to the zero-bit counterpart nor latency overhead as the bit-width is increased. This unveils a novel prospect of counteracting high-stakes misuse of large language models via API. We also analyze the trade-off between multi-bit watermarking and the detection rate.
|
| 292 |
+
|
| 293 |
+
# Limitations
|
| 294 |
+
|
| 295 |
+
Watermarking is a prospective technology that can contribute to safer use of large language models by promoting accountability. However, it is not yet a ready-made solution that can be deployed in the wild. One clear aspect that needs further study is quality compared to non-watermarked generations. Recent empirical results demonstrate that watermarking generally leads to degradation in quality (Tu et al., 2023). Several works attempt to tackle this using techniques such as smaller language models to evenly partition the watermark (Wang et al., 2023a) or adaptive watermarks based on an entropy threshold (Lee et al., 2023). Since our proposed method MPAC maintains the same quality as its zero-bit counterpart, these improvements can be easily extended to our multi-bit framework as well. Another line of research proposes different sampling strategies such as exponential minimum sampling (Aaronson and Kirchner, 2023) or non-distortionary functions (Hu et al., 2023). We believe the general framework of decomposing a multi-bit message into units is easily generalizable to other watermarking schemes. An example is shown in Appendix A.4.
|
| 296 |
+
|
| 297 |
+
Another limitation of multi-bit watermarking is its trade-off between load capacity and detection rate (zero-bit watermarking). While ours outperforms the baselines in the zero-bit watermark setting (§4), overcoming this inherent limitation remains another challenge in deploying multi-bit watermarking over its zero-bit counterpart.
|
| 298 |
+
|
| 299 |
+
# Ethics Statement
|
| 300 |
+
|
| 301 |
+
Watermarking can mitigate malicious use cases by enabling tracing back to the malicious user. This enables holding adversaries accountable for their malfeasance. However, ordinary users may find the idea discomforting, as it may give the sense that the API provider can know what outputs are fed to individual users. This is not the case unless the content is published by the user, which – in many cases – already happens in an environment where the user can be identified (e.g. social media). All in all, the identification of machine-generated texts and tracing their provenance can enhance the accountability of API access to large language models without breaching individual users' privacy.
|
| 302 |
+
|
| 303 |
+
# Acknowledgements
|
| 304 |
+
|
| 305 |
+
This work was supported by IITP grant (2021-0-01343) and NRF grants (2021R1A2C3006659, 2022R1A5A7026673), all funded by MSIT of the Korean Government.
|
| 306 |
+
|
| 307 |
+
# References
|
| 308 |
+
|
| 309 |
+
Scott Aaronson and Hendrik Kirchner. 2023. Watermarking gpt outputs. https://www.scottaaronson.com/talks/watermark.ppt. Accessed: 2023-09-14.
|
| 310 |
+
Sahar Abdelnabi and Mario Fritz. 2021. Adversarial watermarking transformer: Towards tracing text provenance with data hiding. In 2021 IEEE Symposium on Security and Privacy (SP), pages 121-140. IEEE.
|
| 311 |
+
Palmer Annie. 2023. People are using a.i. chatbots to write amazon reviews. *CNBC*.
|
| 312 |
+
Md Asikuzzaman and Mark R Pickering. 2017. An overview of digital video watermarking. IEEE Transactions on Circuits and Systems for Video Technology, 28(9):2131-2153.
|
| 313 |
+
Mikhail J Atallah, Victor Raskin, Michael Crogan, Christian Hempelmann, Florian Kerschbaum, Dina Mohamed, and Sanket Naik. 2001. Natural language watermarking: Design, analysis, and a proof-of-concept implementation. In International Workshop on Information Hiding, pages 185-200. Springer.
|
| 314 |
+
Adam Badawy, Emilio Ferrara, and Kristina Lerman. 2018. Analyzing the digital traces of political manipulation: The 2016 russian interference twitter campaign. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 258-265.
|
| 315 |
+
Elwyn R Berlekamp. 1964. *Block coding with noiseless feedback*. Ph.D. thesis, Massachusetts Institute of Technology.
|
| 316 |
+
Abbas Cheddad, Joan Condell, Kevin Curran, and Paul Mc Kevitt. 2010. Digital image steganography: Survey and analysis of current methods. Signal processing, 90(3):727-752.
|
| 317 |
+
Miranda Christ, Sam Gunn, and Or Zamir. 2023. Undetectable watermarks for language models. arXiv preprint arXiv:2306.09194.
|
| 318 |
+
Thomas M Cover. 1999. Elements of information theory. John Wiley & Sons.
|
| 319 |
+
Christian Schroeder de Witt, Samuel Sokota, J Zico Kolter, Jakob Nicolaus Foerster, and Martin Strohmeier. 2023. Perfectly secure steganography using minimum entropy coupling. In *The Eleventh International Conference on Learning Representations*.
|
| 320 |
+
|
| 321 |
+
Peter Elias. 1991. Error-correcting codes for list decoding. IEEE Transactions on Information Theory, 37(1):5-12.
|
| 322 |
+
Tina Fang, Martin Jaggi, and Katerina Argyraki. 2017. Generating steganographic text with lstms. In Proceedings of ACL 2017, Student Research Workshop, pages 100-106.
|
| 323 |
+
Pierre Fernandez, Antoine Chaffin, Karim Tit, Vivien Chappelier, and Teddy Furon. 2023a. Three bricks to consolidate watermarks for large language models. arXiv preprint arXiv:2308.00113.
|
| 324 |
+
Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, and Teddy Furon. 2023b. The stable signature: Rooting watermarks in latent diffusion models. arXiv preprint arXiv:2303.15435.
|
| 325 |
+
Philip Gage. 1994. A new algorithm for data compression. C Users Journal, 12(2):23-38.
|
| 326 |
+
The Open Group. 2018. The open group base specifications issue 7, 2018 edition ieee std 1003.1TM-2017 (revision of ieee std 1003.1-2008) copyright © 2001 2018 ieee and the open group. https://pubs.opengroup.org/onlinepubs/9699919799/. Accessed: 2023-09-14.
|
| 327 |
+
Meghal Gupta, Venkatesan Guruswami, and Rachel Yun Zhang. 2023. Binary error-correcting codes with minimal noiseless feedback. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pages 1475-1487.
|
| 328 |
+
Venkatesan Guruswami. 2004. List decoding of error-correcting codes: winning thesis of the 2002 ACM doctoral dissertation competition, volume 3282. Springer Science & Business Media.
|
| 329 |
+
Venkatesan Guruswami and Atri Rudra. 2008. Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy. IEEE Transactions on information theory, 54(1):135-150.
|
| 330 |
+
Xuanli He, Qiongkai Xu, Lingjuan Lyu, Fangzhao Wu, and Chenguang Wang. 2022. Protecting intellectual property of language generation apis with lexical watermark. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10758-10766.
|
| 331 |
+
Krystal Hu. 2023. Chatgpt sets record for fastest-growing user base - analyst note. Reuters.
|
| 332 |
+
Zhengmian Hu, Lichang Chen, Xidong Wu, Yihan Wu, Hongyang Zhang, and Heng Huang. 2023. Unbiased watermark for large language models. arXiv preprint arXiv:2310.10669.
|
| 333 |
+
Guang Hua, Jiwu Huang, Yun Q Shi, Jonathan Goh, and Vrizlynn LL Thing. 2016. Twenty years of digital audio watermarking—a comprehensive review. Signal processing, 128:222-242.
|
| 334 |
+
|
| 335 |
+
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825.
|
| 336 |
+
John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein. 2023a. A watermark for large language models. arXiv preprint arXiv:2301.10226.
|
| 337 |
+
John Kirchenbauer, Jonas Geiping, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, and Tom Goldstein. 2023b. On the reliability of watermarks for large language models. arXiv preprint arXiv:2306.04634.
|
| 338 |
+
Kalpesh Krishna, Yixiao Song, Marzena Karpinska, John Wieting, and Mohit Iyyer. 2023. Paraphrasing evades detectors of ai-generated text, but retrieval is an effective defense. arXiv preprint arXiv:2303.13408.
|
| 339 |
+
Rohith Kuditipudi, John Thickstun, Tatsunori Hashimoto, and Percy Liang. 2023. Robust distortion-free watermarks for language models. arXiv preprint arXiv:2307.15593.
|
| 340 |
+
Taehyun Lee, Seokhee Hong, Jaewoo Ahn, Ilgee Hong, Hwaran Lee, Sangdoo Yun, Jamin Shin, and Gunhee Kim. 2023. Who wrote this code? watermarking for code generation. arXiv preprint arXiv:2305.15060.
|
| 341 |
+
Bruce Levin. 1981. A representation for multinomial cumulative distribution functions. The Annals of Statistics, pages 1123-1126.
|
| 342 |
+
Xiyang Luo, Ruohan Zhan, Huiwen Chang, Feng Yang, and Peyman Milanfar. 2020. Distortion agnostic deep watermarking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13548-13557.
|
| 343 |
+
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models.
|
| 344 |
+
Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D Manning, and Chelsea Finn. 2023. Detectgpt: Zero-shot machine-generated text detection using probability curvature. arXiv preprint arXiv:2301.11305.
|
| 345 |
+
Travis Munyer and Xin Zhong. 2023. Deeptextmark: Deep learning based text watermarking for detection of large language model generated text. arXiv preprint arXiv:2305.05773.
|
| 346 |
+
Francesco Pierri, Luca Luceri, Nikhil Jindal, and Emilio Ferrara. 2023. Propaganda and misinformation on facebook and twitter during the russian invasion of ukraine. In Proceedings of the 15th ACM Web Science Conference 2023, pages 65-74.
|
| 347 |
+
|
| 348 |
+
Vidyasagar M Potdar, Song Han, and Elizabeth Chang. 2005. A survey of digital image watermarking techniques. In INDIN'05. 2005 3rd IEEE International Conference on Industrial Informatics, 2005., pages 709-716. IEEE.
|
| 349 |
+
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551.
|
| 350 |
+
Christoph Schuhmann. 2022. Hugging-face datasets: Christophschuhmann/essayswith-instructions. https://huggingface.co/datasets/ChristophSchuhmann/ essays-with-instructions. Accessed: 2023-09-14.
|
| 351 |
+
Mercan Topkara, Giuseppe Riccardi, Dilek Hakkani-Tur, and Mikhail J Atallah. 2006a. Natural language watermarking: Challenges in building a practical system. In Security, Steganography, and Watermarking of Multimedia Contents VIII, volume 6072, pages 106-117. SPIE.
|
| 352 |
+
Mercan Topkara, Cuneyt M Taskiran, and Edward J Delp III. 2005. Natural language watermarking. In Security, Steganography, and Watermarking of Multimedia Contents VII, volume 5681, pages 441-452. SPIE.
|
| 353 |
+
Umut Topkara, Mercan Topkara, and Mikhail J Atallah. 2006b. The hiding virtues of ambiguity: quantifiably resilient watermarking of natural language text through synonym substitutions. In Proceedings of the 8th workshop on Multimedia and security, pages 164-174.
|
| 354 |
+
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
|
| 355 |
+
Shangqing Tu, Yuliang Sun, Yushi Bai, Jifan Yu, Lei Hou, and Juanzi Li. 2023. Waterbench: Towards holistic evaluation of watermarks for large language models. arXiv preprint arXiv:2311.07138.
|
| 356 |
+
Sebastián Valenzuela, Daniel Halpern, and Felipe Araneda. 2022. A downward spiral? a panel study of misinformation and media trust in chile. The International Journal of Press/Politics, 27(2):353-373.
|
| 357 |
+
Lean Wang, Wenkai Yang, Deli Chen, Hao Zhou, Yankai Lin, Fandong Meng, Jie Zhou, and Xu Sun. 2023a. Towards codable text watermarking for large language models. arXiv preprint arXiv:2307.15992.
|
| 358 |
+
Yuxia Wang, Jonibek Mansurov, Petar Ivanov, Jinyan Su, Artem Shelmanov, Akim Tsvigun, Chenxi
|
| 359 |
+
|
| 360 |
+
Whitehouse, Osama Mohammed Afzal, Tarek Mahmoud, Alham Fikri Aji, et al. 2023b. M4: Multi-generator, multi-domain, and multi-lingual black-box machine-generated text detection. arXiv preprint arXiv:2305.14902.
|
| 361 |
+
Stephen B Wicker and Vijay K Bhargava. 1999. Reed-Solomon codes and their applications. John Wiley & Sons.
|
| 362 |
+
John Wieting, Kevin Gimpel, Graham Neubig, and Taylor Berg-Kirkpatrick. 2022. Paraphrastic representations at scale. In Proceedings of the The 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 379-388.
|
| 363 |
+
Xi Yang, Jie Zhang, Kejiang Chen, Weiming Zhang, Zehua Ma, Feng Wang, and Nenghai Yu. 2022. Tracing text provenance via context-aware lexical substitution. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11613-11621.
|
| 364 |
+
KiYoon Yoo, Wonhyuk Ahn, Jiho Jang, and Nojun Kwak. 2023. Robust multi-bit natural language watermarking through invariant features. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2092-2115.
|
| 365 |
+
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068.
|
| 366 |
+
Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Ngai-Man Cheung, and Min Lin. 2023. A recipe for watermarking diffusion models. arXiv preprint arXiv:2303.10137.
|
| 367 |
+
Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. 2018. Hidden: Hiding data with deep networks. In Proceedings of the European conference on computer vision (ECCV), pages 657-672.
|
| 368 |
+
Zachary Ziegler, Yuntian Deng, and Alexander M Rush. 2019. Neural linguistic steganography. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1210-1215.
|
| 369 |
+
|
| 370 |
+
# A Appendix

# Table of Contents

1. Decoding Algorithm
2. Analysis on Watermark Detection
3. List Decoding and Other Techniques
4. Extending MPAC to other methods
5. Implementation, Hardware, Code Details
6. Metrics: Bit Accuracy, Text Quality
7. Discussion on the Hashing Scheme
8. More on Robustness: Other Attacks, Detection
9. Ablations on Datasets and Model Sizes
10. Tabular Results
11. Generation Samples

# A.1 Decoding Algorithm

Algorithm 2: Message Decoding

Input: Watermarked text $X_{1:T}$, hash context width $h$, effective message length $\tilde{b}$
Output: Predicted message $\hat{\mathbf{m}}$, number of colorlisted tokens $w$

/* Initialize counter */
$\mathbf{W}_p[m] = 0 \;\; \forall p, m$
/* Count tokens in colored lists */
for $t$ in $[h+1, T]$ do
  $s = f(X_{t-h:t-1})$
  $p = \text{sample}([\tilde{b}])$ using $s$ as seed
  Permute $\mathcal{V}_t$ using $s$ as seed
  for $m$ in $[r]$ do
    if $X_t \in \mathcal{G}_t^m$ then
      $\mathbf{W}_p[m] \mathrel{+}= 1$
    end
  end
end
/* Predict message */
$\hat{\mathbf{m}}_r = \text{""}$, $w = 0$
for $p$ in $[\tilde{b}]$ do
  $w \mathrel{+}= \max_m \mathbf{W}_p[m]$
  $\hat{m} = \operatorname{argmax}_m \mathbf{W}_p[m]$
  $\hat{\mathbf{m}}_r \mathrel{+}= \text{str}(\hat{m})$
end
Convert $\hat{\mathbf{m}}_r$ to the bit message $\hat{\mathbf{m}}$
return $\hat{\mathbf{m}}, w$

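For concreteness, the sketch below re-expresses this decoding loop in Python. It is illustrative only: `hash` over the context tuple stands in for the seeded hash $f$, and the position sampler and vocabulary permutation are simplified stand-ins rather than the paper's implementation; all names are hypothetical.

```python
import random

def decode_message(tokens, h, num_positions, radix, gamma, vocab_size):
    """Tally colorlist membership per message position, then take the
    majority color of each position as the predicted message digit."""
    counts = [[0] * radix for _ in range(num_positions)]
    for t in range(h, len(tokens)):
        seed = hash(tuple(tokens[t - h:t]))       # stand-in for f(X_{t-h:t-1})
        rng = random.Random(seed)
        p = rng.randrange(num_positions)          # position allocation
        perm = list(range(vocab_size))
        rng.shuffle(perm)                         # permute the vocabulary
        list_size = int(gamma * vocab_size)       # each colorlist holds gamma*|V| tokens
        color = perm.index(tokens[t]) // list_size
        if color < radix:                         # token fell inside a colorlist
            counts[p][color] += 1
    digits = [max(range(radix), key=lambda m: counts[p][m])
              for p in range(num_positions)]      # argmax per position
    w = sum(max(c) for c in counts)               # total colorlisted strength
    return digits, w                              # digits are later converted to bits
```
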
# A.2 Analysis on Watermark Detection

# A.2.1 Watermark Detection

The presence of a watermark is determined by counting the number of tokens in the greenlist. For human-generated text, which has no knowledge of the greenlist rule, a token will be from the greenlist with probability $\gamma \leq 0.5$, the proportion of the greenlist size relative to the entire vocabulary. Without knowledge of the greenlist (the null hypothesis), the number of tokens in the greenlist $(g)$ follows a binomial distribution. Kirchenbauer et al. (2023a) used the normal approximation to the binomial distribution to compute the $z$-statistic for a text with $T$ tokens: $z = \frac{g - \gamma T}{\sqrt{\gamma(1 - \gamma)T}}$.
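The statistic is a one-line computation; here is a direct transcription (the decision threshold on $z$ is left to the deployer):

```python
from math import sqrt

def z_score(green_count, total_tokens, gamma):
    """One-proportion z-test: under H0, each token is green w.p. gamma."""
    return (green_count - gamma * total_tokens) / sqrt(gamma * (1 - gamma) * total_tokens)

# e.g., 90 green tokens out of T=200 at gamma=0.25 is far above chance
print(z_score(90, 200, 0.25))  # ~6.53
```
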

Next, we further analyze how the bit-width of the message and the radix affect detection performance. Our analysis stems from the observation that as we increase the bit-width, the detection score for non-watermarked text increases more rapidly than that of watermarked text. Consequently, the difference between the two scores decreases as larger bit-widths are used, leading to reduced separability. The results are in Fig. 8: notice that the difference between the scores of watermarked and non-watermarked texts decreases for larger bit-widths.

Figure 7: Performance difference between watermark extraction with and without corruption. "Deterministic" denotes sequentially encoding each position of the message in the Greenlist framework, as done in Ziegler et al. (2019). Mixing $20\%$ of non-watermarked text makes the bit accuracy of the sequential encoding scheme nearly random.

To get a sense of what is going on, we abstract away the language model and other complexities by modeling the problem purely through statistical distributions. To recap, our detection statistic (Eq. 1) was computed by aggregating the number of tokens over each position of the message. Letting $C_i$ be the number of tokens in the colorlist for position $i$, we can write the aggregated form as

$$
C = C_0 + \dots + C_{p-1} \stackrel{H_0}{\sim} \operatorname{Binomial}(T, \gamma) \tag{2}
$$

However, note that during decoding the ground-truth message is unknown and is thus predicted by taking the colorlist with the maximum number of tokens. This is problematic when decoding non-watermarked text, as it biases the statistic upward when the bit-width is increased. We let $W_i = [w_0, \dots, w_{r-1}]$ be the number of tokens in the $r$ colorlists (the strength of the watermark) for position $i$. For non-watermarked text, we can assume this is a random variable with equal probability for each colorlist:

$$
W_i \sim \operatorname{Multinomial}(n_i, [\gamma, \dots, \gamma]) \tag{3}
$$

where $n_i$ is the number of tokens allocated to position $i$. Our decoding method takes the maximum cell value, which is itself a random variable:

$$
W_i^{\max} = \max(W_i) = \max([w_0, \dots, w_{r-1}]). \tag{4}
$$


Figure 8: The detection scores of non-watermarked texts, watermarked texts, and their difference as a function of the number of tokens observed. The difference between the scores decreases as bit-width increases, leading to reduced separability.

Figure 9: Simulation of the difference between (unnormalized) scores for watermarked and non-watermarked multinomial distributions. Higher scores signify higher separability, hence higher detection performance. We use $\epsilon = 0.1$. For the right panel, we use $\gamma = .125$ to allow a larger radix.

Our final statistic for the detection score is the sum of this variable over all positions:

$$
W^{\max} = \sum_{i}^{p} W_i^{\max} \tag{5}
$$

We see that our statistic depends on the number of candidates when selecting the maximum cell (i.e., radix) through Eq. 4 and on the number of positions (i.e., bit-width) through Eq. 5.

To verify the effect of bit-width and radix on the detection score, we compare the difference in the statistics for a uniform multinomial distribution, which signifies non-watermarked text, and a multinomial distribution with a slightly modified probability vector $[\gamma+\epsilon, \gamma, \dots, \gamma]$, which signifies the added bias term of the watermarked distribution. We draw 1000 samples of $W^{\max}$ and compute the difference in the detection scores for the two distributions. The results in Fig. 9 corroborate that an increase in bit-width or radix decreases the separability of the detection scores.
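This simulation takes only a few lines; the following NumPy sketch (with hypothetical parameter values, and the biased probability vector renormalized for simplicity) reproduces the qualitative trend:

```python
import numpy as np

rng = np.random.default_rng(0)

def w_max_scores(n_tokens, bit_width, radix, gamma, eps, n_samples=1000):
    """Sample W^max: per-position multinomial counts, max cell, summed."""
    probs = np.full(radix, gamma)
    probs[0] += eps                    # eps > 0 mimics the watermark bias
    probs /= probs.sum()               # renormalize (simplification)
    per_pos = n_tokens // bit_width
    draws = rng.multinomial(per_pos, probs, size=(n_samples, bit_width))
    return draws.max(axis=-1).sum(axis=-1)

for b in (8, 16, 32):                  # separability shrinks as bit-width grows
    gap = (w_max_scores(250, b, 4, 0.25, 0.1).mean()
           - w_max_scores(250, b, 4, 0.25, 0.0).mean())
    print(f"bit-width={b}: mean score gap = {gap:.2f}")
```
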

In an attempt to remedy this, we tried computing the likelihood of $W_i^{\max}$ before aggregating, using an approximation from Levin (1981) (more details in the next section). However, this only led to on-par or slightly worse performance. This may be because $n_i$ is small when $T$ is small compared to the length of the message. Other than this, some of the approaches we attempted were:

- Computing the test statistic per position, or weighting the statistic of each position by $n_i$ before aggregating.
- Computing the p-value of the binomial random variables rather than using the normal approximation, i.e., the regularized incomplete beta function.
- Computing the p-value under the null hypothesis that the distribution over the colorlists is uniform, i.e., a chi-square goodness-of-fit test.
All the approaches either led to on-par or slightly worse results.

# A.2.2 Approximating Max Multinomial Cell Distribution

We used the approximation of Levin (1981) for modeling the distribution of the maximum cell frequency. For completeness, we present the steps of the approximation adapted to our case. For a multinomial distribution with sample size $N$ and probability vector $[p_0, \dots, p_{r-1}]$, let $a$ be the maximum cell value. Then the cumulative distribution function of the maximum cell value can be approximated, for any real number $s > 0$, by

$$
P(a) = \frac{N!}{s^N e^{-s}} \left\{ \prod_{i}^{r-1} P(X_i \leq a) \right\} P(W = N) \tag{6}
$$

where $X_i \sim \operatorname{Poisson}(sp_i)$ and $W = \sum_{i}^{r-1} Y_i$, where each $Y_i$ is a Poisson$(sp_i)$ variable truncated to the range $0, 1, \ldots, a$. Following Example 1 of Levin (1981), we set $s = N$ and use Stirling's approximation for $N!$. We also approximate $W$ using the normal approximation to the Poisson distribution.
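To make the steps concrete, below is a rough SciPy sketch of this approximation with $s = N$, Stirling's formula for $N!$, closed-form truncated-Poisson moments, and a continuity-corrected normal approximation for $P(W = N)$. The exact numerical treatment in our implementation may differ.

```python
import math
from scipy.stats import norm, poisson

def max_cell_cdf(a, N, probs):
    """Approximate P(max multinomial cell <= a) following Levin (1981)."""
    s = N
    # N! / (s^N e^{-s}) with s = N reduces to ~sqrt(2*pi*N) via Stirling
    stirling = math.sqrt(2 * math.pi * N)
    prod_cdf, mean_w, var_w = 1.0, 0.0, 0.0
    for p in probs:
        lam = s * p
        Fa = poisson.cdf(a, lam)
        prod_cdf *= Fa
        # moments of a Poisson(lam) right-truncated at a
        m1 = lam * poisson.cdf(a - 1, lam) / Fa
        m2 = lam ** 2 * poisson.cdf(a - 2, lam) / Fa + m1   # E[X^2]
        mean_w += m1
        var_w += m2 - m1 ** 2
    sd = math.sqrt(max(var_w, 1e-12))
    p_w_eq_n = norm.cdf(N + 0.5, mean_w, sd) - norm.cdf(N - 0.5, mean_w, sd)
    return stirling * prod_cdf * p_w_eq_n

print(max_cell_cdf(40, 100, [0.25] * 4))   # P(max cell <= 40) for r = 4
```
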
# A.3 List Decoding and Other Techniques
The decomposition of the message into bit positions bounds the computation during decoding by the number of tokens. This allows MPAC to output a list of the most likely messages without exhaustively considering all possible messages. We alter our decoding function to output candidate messages sorted by level of confidence. Denoting the predicted message for position $i$ by $\hat{m}$, and the observed number of tokens in the colored list (the strength of the watermark) by $w = \mathbf{W}_i[\hat{m}]$, the confidence of $\hat{m}$ should be higher if $w$ deviates from the expected mean under the null hypothesis that all colored lists are equally likely to be sampled. We define the confidence at position $i$ as $c_i \propto \operatorname{Pr}(W_i^{\max} \leq w \mid H_0)$, where $W_i^{\max}$ is the maximum cell value of $W_i \stackrel{H_0}{\sim} \operatorname{Multinomial}(T_i, [\gamma, \cdots, \gamma])$ and $T_i$ is the number of tokens assigned to position $i$. The distribution of $W_i^{\max}$ is approximated using techniques from Levin (1981) (see Appendix A.2.2).
Our algorithm can be parameterized by the confidence bound on each position:

- Input: Best prediction $\hat{\mathbf{m}}$ found by majority voting via Alg. 1, confidence bound $c_0$
- Output: $\hat{\mathbf{m}}_1, \dots, \hat{\mathbf{m}}_{|\mathbb{L}|} \in \mathbb{L}$, whose predictions are altered on positions with confidence under $c_0$

<table><tr><td colspan="4">Bit Accuracy</td></tr><tr><td>δ</td><td>0.5</td><td>1</td><td>2</td></tr><tr><td>No feedback</td><td>.626</td><td>.766</td><td>.948</td></tr><tr><td>δ̃ = δ + 1</td><td>.769</td><td>.860</td><td>.960</td></tr></table>

Table 2: Results of using feedback to adapt the bias, at $T = 100$, $b = 8$.

<table><tr><td></td><td colspan="5">Accuracy Gained</td></tr><tr><td></td><td>8b</td><td>16b</td><td>24b</td><td>32b</td><td></td></tr><tr><td>$c_i$-sorted list</td><td>1.1%</td><td>3.7%</td><td>6.0%</td><td>5.6%</td><td></td></tr><tr><td>Random list</td><td>0.6%</td><td>0.4%</td><td>0.5%</td><td>0.3%</td><td></td></tr><tr><td colspan="6">Latency (seconds/250 tokens)</td></tr><tr><td></td><td>0b</td><td>8b</td><td>16b</td><td>24b</td><td>32b</td></tr><tr><td>Encoding (7.9)</td><td>8.19</td><td>7.98</td><td>8.01</td><td>7.96</td><td>8.24</td></tr><tr><td>Decoding (.09)</td><td>.08</td><td>.09</td><td>.09</td><td>.09</td><td>.10</td></tr></table>

Table 3: Absolute improvement in bit accuracy from confidence-based list decoding versus a random list (top), and encoding/decoding latency in seconds per 250 tokens (bottom; parentheses show the non-watermarked baselines).
Empirically, we determine $c_0$ by constraining $|\mathbb{L}|$. Note that since $\hat{\mathbf{m}}$ is always the most confident message, we populate $\mathbb{L}$ with the next most confident messages. To do this, we greedily alter the positions with the lowest confidence to the colorlist with the second-largest number of tokens. Note that this list decoding technique is not unique to our method and can be applied to others as long as the decoding stage is computationally feasible.
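A minimal sketch of this greedy construction (the confidence values would come from the approximation in Appendix A.2.2; all names here are illustrative):

```python
from itertools import combinations

def list_decode(best, second_best, confidence, list_size=16):
    """Build candidate messages by flipping the least-confident positions
    of `best` to their second-most-voted color."""
    order = sorted(range(len(best)), key=lambda i: confidence[i])
    k = max(1, (list_size - 1).bit_length())   # flip among the k least confident
    candidates = [list(best)]
    for n_flip in range(1, k + 1):
        for subset in combinations(order[:k], n_flip):
            cand = list(best)
            for i in subset:
                cand[i] = second_best[i]
            candidates.append(cand)
            if len(candidates) == list_size:
                return candidates
    return candidates
```
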

# A.3.1 Results

We show the absolute accuracy gained using confidence-based list decoding ($|\mathbb{L}| = 16$) compared with a random list. We further compare the encoding and decoding latency for sequences with $\sim 250$ tokens on a single NVIDIA A100 when using the additive LeftHash scheme with context width 1. The results are in Table 3. The latency does not increase proportionally with message bit length, making the approach scalable to long messages. With an efficient hashing scheme, watermarking adds negligible overhead to both encoding and decoding compared to vanilla generation, which takes 7.9 seconds and 0.09 seconds, respectively.

# A.3.2 Error Correction Codes

We first use the basic notions from coding theory adapted from Cover (1999) to formulate our problem:

- Encoding function: a function $E: \mathcal{M} \to \mathcal{X}$ that maps the original message into a longer, usually redundant string, where $\mathcal{M} \subseteq [r]^b$ and $\mathcal{X} \subseteq \Sigma^T$. The rate of $E$ is given by $\frac{b}{T} \log_2 r$ bits/symbol.
- $p(y|x)$: a noisy channel that models the transmission of the encoded message.
- Channel capacity: the upper bound on the rate of an encoding function for reliable transmission.
- Decoding function: a function $D: \mathcal{Y} \to \mathcal{M}$ that recovers the original message from $y$.

We first simplify our setting to embedding a single-digit message ($b = 1$), which does not lead to a loss of generality as MPAC encodes each position independently. Each token of a language model is a signal for embedding the message $(m)$ by repetitively sampling from the $m^{\text{th}}$ colorlist. Our encoding function is therefore a repetition code that redundantly encodes the message content $T$ (the number of tokens) times. Our channel is the generation process of the language model, which stochastically transmits the encoded message by sampling from the vocabulary distribution that has been modified to favor the selected colorlist. The success probability of each transmission depends on the magnitude of the bias $\delta$, the entropy of the vocabulary distribution, and, more holistically, the text distribution. The decoding function selects the argmax over the colorlists to predict the message content, i.e., majority voting. To attain a higher match rate at the expense of the rate of $E$, one can apply error correction codes such as Reed-Solomon codes (Wicker and Bhargava, 1999) before the encoding function and after the decoding function.
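For intuition, the repetition-code view with majority-vote decoding can be sketched as below, caricaturing the language model's biased sampling as an i.i.d. noisy channel (a deliberate simplification; in reality the per-token success probability varies with the entropy and the bias $\delta$):

```python
import random

def encode(digit, n):
    return [digit] * n                          # repeat the digit n times

def channel(codeword, fail_prob, radix=4):
    # each symbol survives w.p. 1 - fail_prob, else lands in a random colorlist
    return [c if random.random() > fail_prob else random.randrange(radix)
            for c in codeword]

def decode(received, radix=4):
    return max(range(radix), key=received.count)  # majority vote

sent = encode(2, n=30)                            # n ~ tokens per position
print(decode(channel(sent, fail_prob=0.4)))       # usually recovers 2
```
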

# A.3.3 Message Correction with Feedback

One key characteristic of our $p(y|x)$ is that we can instantly check whether the message was correctly transmitted by examining whether the sampled token is in the correct colorlist. This property resembles the setting of error correcting codes with feedback, in which the receiver can send feedback to the sender after receiving the message (Berlekamp, 1964; Gupta et al., 2023). One can take advantage of this property by adapting the magnitude of the bias during encoding whenever the majority vote of a given position differs from the actual message.

We provide some preliminary results of taking advantage of feedback during message encoding. One simple scheme adapts the magnitude of the bias: when the message is not being correctly encoded, we enlarge the bias. Concretely, for $0 \leq t \leq T$ allocated to position $p$, if the current max colorlist does not match the actual message content, i.e., $\mathbf{m}[p] \neq \operatorname{argmax}_j \mathbf{W}_p[j]$, we use a larger bias $\tilde{\delta} > \delta$. The results in Table 2 show that this leads to an increase in multi-bit accuracy. However, we observed that this came with a degradation in text quality as measured by automatic metrics. We leave finding a better methodology for future work.
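A sketch of this feedback rule at generation time, in Python-flavored pseudocode (`logits`, `colorlist_ids`, and the running counters are placeholders for the actual generation state):

```python
def biased_logits(logits, colorlist_ids, counts, target, delta, delta_big):
    """Add the watermark bias to the target colorlist's tokens; if the
    running majority vote for this position currently disagrees with the
    target digit, apply the larger feedback bias (delta_big > delta)."""
    current_vote = max(range(len(counts)), key=lambda m: counts[m])
    bias = delta_big if current_vote != target else delta
    out = list(logits)
    for tok in colorlist_ids[target]:
        out[tok] += bias
    return out
```
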

# A.4 Extending MPAC to other methods

**Block Allocation** Instead of allocating a single position as done in MPAC, we can allocate a block of the message, after which the techniques of Cyclic-Shift can be used to encode the block message. This ensemble approach enables prior works to embed longer messages. Deriving its name from Position Allocation, we dub this Block Allocation.

# Block Allocation

1. Compute $s = f(X_{t-h:t-1})$.
2. Chunk the message into $n$ blocks: $\mathbf{m} = [\mathbf{m}_1, \dots, \mathbf{m}_n]$, where $\mathbf{m}_i \in \Sigma^{b/n}$.
3. $p \gets \text{sample}([n])$ using $s$ as seed.
4. Run Cyclic-Shift with $\mathbf{m}_p$ as the message.

At decoding time, we predict the message for each block and concatenate them. As a preliminary experiment, we use Block Allocation with Cyclic-Shift using $n = 4$ blocks. Block Allocation can embed 24-bit messages with .901 bit accuracy (cf. Cyclic-Shift alone achieves .775) and 32-bit messages with .871 accuracy.
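The allocation step itself is tiny; a toy sketch with the Cyclic-Shift encoder abstracted away (all names are hypothetical):

```python
import random

def split_blocks(message, n):
    """Chunk a length-b message into n equal blocks."""
    k = len(message) // n
    return [message[i * k:(i + 1) * k] for i in range(n)]

def pick_block(context_tokens, n):
    """Seed on the hashed context to choose which block this token encodes."""
    return random.Random(hash(tuple(context_tokens))).randrange(n)

blocks = split_blocks("011010001110", n=4)   # four 3-bit blocks
p = pick_block((17, 42), n=4)                # encode blocks[p] via Cyclic-Shift
```
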

**Extension to Other Zero-bit Watermarking** Aaronson and Kirchner (2023) is another line of work in zero-bit watermarking that modifies the sampling process by generating a secret vector $\mathbf{r} \in [0,1]^{|\mathcal{V}|}$ based on the random seed $s$. Given the original probability distribution $\mathbf{p}$ over the vocabulary $\mathcal{V}$, the token with both large $\mathbf{p}_v$ and large $\mathbf{r}_v$ is favored by choosing

$$
x = \operatorname{argmax}_{v \in \mathcal{V}} \mathbf{r}_v^{1/\mathbf{p}_v}. \tag{7}
$$

We can adapt our position allocation method to this as well by preceding the above step with position allocation. Then, the secret key can be modified depending on the message content by the following rule:

$$
\mathbf{r} = \begin{cases} \mathbf{r} & \text{if } \mathbf{m}[p] = 0 \\ \mathbf{1} - \mathbf{r} & \text{if } \mathbf{m}[p] = 1 \end{cases} \tag{8}
$$

where $\mathbf{1}$ is the all-ones vector. Analogous to favoring mutually exclusive colorlists, this favors different tokens depending on the message content. At decoding time, we can similarly maintain a counter per position for the two cases.
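A sketch of this message-dependent sampling rule (assuming `r` is the seeded secret vector and `p_dist` the model's next-token distribution; both names are illustrative):

```python
import numpy as np

def sample_token(r, p_dist, message_bit):
    """Pick argmax_v r_v^(1/p_v), flipping the key to 1 - r when the
    allocated message bit is 1 (Eq. 8)."""
    key = r if message_bit == 0 else 1.0 - r
    # equivalent to argmax key**(1/p): compare log(key) / p for stability
    return int(np.argmax(np.log(key + 1e-12) / np.maximum(p_dist, 1e-12)))
```
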

# A.5 Implementation, Hardware, Code Details

We follow Kirchenbauer et al. (2023a) in most experimental settings. For the hashing scheme in the main paper, we use the LeftHash scheme with context width $h = 1$. In the appendix, we provide results for the SelfHash scheme. For further discussion of the hash scheme, see Appendix A.7. To generate sequences with the desired token length $T$, we generate with the maximum number of tokens set to $T$. We then filter out watermarked and non-watermarked sequences with token lengths under $T_{\mathrm{low}} = T - \tau$. We set $\tau = 25$, except for the LFQA dataset, where we set $\tau = 50$ as its instructions ask for answers of 200-300 words. For generation, we use sampling with a temperature of 0.7. For each bit-width, a new set of generations had to be made as the length of the message differed.

For the copy-paste attack, we sample a random non-watermarked text and truncate it to the same length. A position is then randomly sampled, at which a $p$ percentage of the watermarked text is inserted into the non-watermarked text. We experiment with varying degrees of $p$ ($10\% \sim 50\%$).
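A minimal sketch of constructing this attack on token sequences (names are illustrative):

```python
import random

def copy_paste_attack(watermarked, non_watermarked, p):
    """Embed a fraction p of the watermarked tokens at a random position
    inside equal-length non-watermarked text."""
    keep = int(len(watermarked) * p)
    segment = watermarked[:keep]
    host = non_watermarked[:len(watermarked)]
    pos = random.randrange(len(host) - keep + 1)
    return host[:pos] + segment + host[pos + keep:]
```
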

We used float16 for all our models during generation. Our experiments were run on a single NVIDIA A100. For $T = 250$, generating around 500 watermarked and non-watermarked samples took approximately 200 minutes with the LeftHash scheme. With the SelfHash scheme, this took significantly longer ($\sim 550$ minutes). Our implementation is based on the official codebase of Kirchenbauer et al. (2023a): https://github.com/jwkirchenbauer/lm-watermarking.

For baselines, we use the official repositories of Fernandez et al. (2023a) and Wang et al. (2023a). For Message-Hash, the configuration presented in their work (GPT-2 as the proxy model) cannot watermark the outputs of LLaMA-based models due to the difference in tokenizers, so we resort to their Vanilla Marking scheme. This makes all other factors equivalent across the three methods (MPAC, Message-Hash, Cyclic-Shift) except the message encoding function $\mathcal{E}$ described in §3. We believe this has little to no effect on watermark performance, since the proxy model is intended to enhance the quality of the text (in terms of perplexity) rather than the strength of the watermark.

Figure 10: Text quality vs. bit accuracy across bias magnitudes $\delta$ at $T = 100$, $b = 8$.
# A.6 Metrics: Bit Accuracy, Text Quality

**Text Quality Metrics** We use the automatic metrics from Kirchenbauer et al. (2023b): perplexity (PPL) computed with a larger oracle model (LLaMA2-13B) and semantic similarity based on a paraphrase model (P-SP; Wieting et al., 2022). Using P-SP, we measure the semantic similarity between the human text and the watermarked text given the same prompt. While human evaluation is considered the gold standard, our main purpose is to show that our multi-bit watermarking does not degrade quality compared to zero-bit watermarking. Moreover, the effect of watermarking relative to no watermarking shows promising results in the human evaluations of Kirchenbauer et al. (2023b, Appendix A.2 and A.9) when sufficiently large models are used for open-ended generation. Additionally, Fernandez et al. (2023a) demonstrate that watermarking does not lead to noticeable performance degradation even on benchmarks with unambiguous answers, such as coding and math, especially with sufficiently large models, albeit at a small bias. We further show in Fig. 10 the trade-off curve between bit accuracy and text quality. The marker size indicates the magnitude of the bias ($\{1, 1.5, 2, 3, 4, 5\}$) and horizontal dashed lines indicate the non-watermarked counterparts. The analysis shows that $\delta = 2$ lies at a good trade-off point.

**Bit Accuracy for Multi-bit Watermark** In our experiments, we used bit accuracy (error) as our metric for multi-bit watermark performance. This is a general metric that is independent of the downstream application and the encoding scheme. However, computing the exact match of a message should be done dependent on the context. To illustrate, we start with some examples. First, consider the case where the encoding scheme identifying users simply assigns a message to each user. Then, by embedding a 4-bit message one can encode $2^4$ different users: $\mathbf{m} = \text{'0000'}$ for Bob, $\mathbf{m} = \text{'0001'}$ for Alice, and so on. For such a scenario, one might be interested in computing the exact match of the 4-bit message, also known as the packet error ratio. While this encoding scheme enables tracing back to the exact user at low load capacity, it is extremely inflexible as it cannot handle the influx or outflux of users.

Conversely, one can turn to a more flexible encoding scheme by encoding each character. Using UTF-8, this requires 8 bits per character, meaning 40 bits are required just to encode a 5-character user ID. For this scenario, one might be more interested in computing the packet error ratio of each character or of the entire 40-bit message. A more realistic encoding scheme will lie somewhere in the middle and use a more efficient representation, e.g., by merging often-used bytes as done in byte pair encoding (Gage, 1994). Combined with error correction codes such as the Reed-Solomon code (Wicker and Bhargava, 1999), this allows a more robust representation. Since focusing on a single type of encoding scheme (and, more fundamentally, on what information to embed) narrows down the potential applications, we report bit accuracy in our main experiments, as done in previous works (Zhu et al., 2018; Luo et al., 2020; Yang et al., 2022; Yoo et al., 2023; Fernandez et al., 2023b). For $T = 250$, the packet error ratio for the 8-bit message was $7.1\%$, which is $5.7\%$ higher than the bit error rate. With 16-list decoding, this is reduced to $2.4\%$.
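Both metrics are straightforward to compute; a small sketch over hypothetical bit arrays:

```python
def bit_accuracy(true_bits, pred_bits):
    return sum(t == p for t, p in zip(true_bits, pred_bits)) / len(true_bits)

def packet_error(true_bits, pred_bits):
    return float(true_bits != pred_bits)     # 1.0 if any bit differs

truth, pred = [0, 1, 1, 0], [0, 1, 0, 0]
print(bit_accuracy(truth, pred))             # 0.75
print(packet_error(truth, pred))             # 1.0 (exact match failed)
```
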

Another metric, considered in Table III of Fernandez et al. (2023a), combines the detection scheme with the packet error ratio. In this scenario, they assume an encoding scheme that assigns each user a single message and compute the percentage of cases in which the exact user is found at a fixed false positive rate. At $\mathrm{FPR} = 10^{-3}$ and using an 8-bit message (256 users), we can correctly identify $90.5\%$ of cases. Our true positive rate was computed with the setting used in Table 8.
<table><tr><td colspan="5">Ratio Sampled Position (Sorted)</td></tr><tr><td>LeftHash (h=1)</td><td>0.319</td><td>0.251</td><td>0.235</td><td>0.195</td></tr><tr><td>SelfHash (h=4)</td><td>0.264</td><td>0.257</td><td>0.242</td><td>0.238</td></tr></table>

Table 4: Ratio of sampled positions for $b = 8$, $r = 4$ (four positions in total) under the two hashing schemes used for position allocation.

# A.7 Discussion on the Hashing Scheme

The hashing scheme for generating the seed plays a significant role in watermarking. For our MPAC, the hashing scheme is employed once for position allocation and once for permuting the vocabulary list. Here, we discuss some implications of the design choices.

To recap, the function $f(X_{t-h:t-1})$ is used to hash the $h$ most recent tokens before generating the $t^{\text{th}}$ token. Following the terminology of Kirchenbauer et al. (2023b), LeftHash takes the leftmost token, while SelfHash is determined in a slightly more involved way that depends on the $t^{\text{th}}$ token itself (see Algorithm 1 of Kirchenbauer et al. (2023b)). The context width and the hashing scheme determine the robustness and quality (diversity) trade-off. For our experiments, we use the two configurations proposed in previous work (LeftHash with $h = 1$ and SelfHash with $h = 4$), found to be effective in both aspects, without further fine-tuning.

As expected from the trade-off, perplexity was slightly higher for LeftHash than for SelfHash (5.1 vs. 4.9 on average for 250 tokens), while P-SP was at the same level. One clear distinction between the two schemes was the encoding latency: as SelfHash iteratively searches for tokens, it took significantly longer than LeftHash, which had nearly no overhead compared to no watermarking (Appendix A.5 and Table 3). In addition, we observed that the sampled positions were not uniform for LeftHash with $h = 1$, as shown in Tab. 4, due to the reduced diversity of tokens in the context width. Despite this, multi-bit performance was similar for the two schemes (Tables 13 and 14). A possible direction for improvement is to use different hashing schemes for position allocation (more robust) and vocabulary partitioning (more quality-focused).

# A.8 More on Robustness: Other Attacks, Detection

We also test our watermark against DIPPER (Krishna et al., 2023), a specialized paraphrasing model.

Figure 11: AUC vs. number of tokens observed when corrupted with the copy-paste attack for an 8-bit message.

Figure 12: Corrupted bit accuracy for (a) the copy-paste attack controlled by the human-text percentage at $T = 250$ and (b) the paraphrasing attack using GPT-3.5 embedding 8-bit messages at varying token lengths. For (b), we show multiple list sizes ($|\mathbb{L}| \in \{2,4,8,16\}$) by color gradation, as 8-bit messages have a relatively small output space.
Figure 13: Multi-bit performance across datasets and model sizes.
<table><tr><td colspan="5">Bit Acc. after Paraphrasing with DIPPER</td></tr><tr><td>Bit-width</td><td>8</td><td>16</td><td>24</td><td>32</td></tr><tr><td>Best Prediction</td><td>.922 (.13)</td><td>.825 (.12)</td><td>.778 (.12)</td><td>.736 (.10)</td></tr><tr><td>16-List Decoded</td><td>.982 (.05)</td><td>.924 (.08)</td><td>.864 (.10)</td><td>.801 (.09)</td></tr></table>

Table 5: Robustness under paraphrasing using DIPPER (lexical diversity = 20).
<table><tr><td rowspan="2"></td><td rowspan="2">GPT-3.5</td><td colspan="4">DIPPER</td></tr><tr><td>Lex.=20</td><td>Lex.=40</td><td>Lex.=60</td><td>Lex.=60 Ordering=60</td></tr><tr><td>P-SP</td><td>.815</td><td>.933</td><td>.897</td><td>.844</td><td>.827</td></tr><tr><td>Absolute Change in # of Words</td><td>36</td><td>13</td><td>16</td><td>19</td><td>20</td></tr><tr><td>Bit Acc.</td><td>.733</td><td>.922</td><td>.849</td><td>.757</td><td>.719</td></tr></table>

Table 6: Comparison of the two paraphrasing methods in terms of text quality.
<table><tr><td colspan="2">Robustness on GPT-3.5 paraphrasing</td></tr><tr><td>Methods</td><td>Bit Acc.</td></tr><tr><td>MPAC</td><td>.733 (.19)</td></tr><tr><td>Cyclic-Shift*</td><td>.655 (.25)</td></tr><tr><td>Message-Hash*</td><td>.540 (.24)</td></tr></table>

Table 7: Bit accuracy after paraphrasing at $T = 250$, $b = 8$. For Cyclic-Shift and Message-Hash, we use 100 samples due to resource constraints on GPT-3.5.

DIPPER is parameterized by two scalars, which control lexical diversity and token-order diversity. We first present results across bit-widths with a lexical diversity of 20 (out of 100). We see in Table 5 that the watermark fares considerably better than under the GPT-3.5 attack.

To gauge the magnitude of semantic drift under the two paraphrasing methods, we compute the P-SP between the original watermarked text and its paraphrased counterpart, as well as the absolute change in the number of words. Table 6 shows that paraphrasing with GPT-3.5 changes the semantics and the word count more than the setting used in Table 5, which may explain why the multi-bit watermark performance is lower for GPT-3.5. When we increase the diversity parameters of DIPPER, it degrades the watermark performance as much as GPT-3.5 does.

Some other forms of attack considered in the literature are word substitution, insertion, and deletion. Word substitution is very similar to the copy-paste attack considered in the main paper. Our watermark scheme is also robust to partial insertion and deletion of words, as MPAC relies on the local context to synchronize the positions of the message and the ordering of the vocabulary.

**Robustness of Zero-bit Watermark** Here we provide results for detection performance under corruption. We use the copy-paste attack with attack percentages of $\{10\%, 20\%, 30\%, 40\%, 50\%\}$ and compare the AUC vs. number-of-tokens-observed curves, similar to Fig. 11. While detectability is noticeably affected, the final AUC is recovered to a large degree after observing 250 tokens. In order of attack strength, the final AUCs are .992, .987, .980, .971, and .942, respectively. For the zero-bit counterpart, all scores are over .990.

# A.9 Ablations on Datasets and Model Sizes

We show additional results on other datasets and model sizes in Fig. 13. The C4 news-like subset is the dataset used in our main experiment. Long-form Question-Answering (LFQA) is a dataset curated by Krishna et al. (2023) from Reddit's "Explain Like I'm Five" (ELI5) forum. The Essays dataset comprises pairs of instructions and essays (Schuhmann, 2022). Wikitext (Merity et al., 2016) comprises Wikipedia articles; we use the 'wikitext-2' subset. For LFQA, we use the fine-tuned version, LLaMA-2-Chat, specialized for chat, as its prompts explicitly contain questions or instructions.

It is apparent that watermark performance is affected by the text distribution. When the entropy of the vocabulary distribution is low (low diversity), there is little room for encoding the message with a fixed bias.
<table><tr><td rowspan="2">Bit-width</td><td colspan="5">True Positive Rate</td></tr><tr><td>0</td><td>8</td><td>16</td><td>24</td><td>32</td></tr><tr><td>FPR=1e-2</td><td>0.999</td><td>0.986</td><td>0.974</td><td>0.964</td><td>0.958</td></tr><tr><td>FPR=1e-3</td><td>0.997</td><td>0.974</td><td>0.956</td><td>0.943</td><td>0.915</td></tr><tr><td>FPR=1e-4</td><td>0.997</td><td>0.96</td><td>0.934</td><td>0.905</td><td>0.88</td></tr><tr><td>FPR=1e-5</td><td>0.994</td><td>0.951</td><td>0.907</td><td>0.851</td><td>0.793</td></tr></table>

Table 8: True positive rate at a fixed false positive rate across bit-widths. We use $\sim 500$ positive samples and $\sim 100{,}000$ negative samples. We count only unique tokens, following Kirchenbauer et al. (2023a) and Fernandez et al. (2023a); this has the effect of removing outlier human-text samples with exceptionally high scores.

This has been observed in zero-bit watermarking as well, where watermark performance suffers for low-entropy text distributions such as coding (Lee et al., 2023; Kirchenbauer et al., 2023b). For our multi-bit case, this means the load capacity is inherently low for such text distributions. This is especially evident for LFQA, in which the model consistently starts the response by restating the question (e.g., "The reason for [Question] is ..."). Across model scales, the trend is not as apparent, although we found that the largest model consistently has lower performance. This hints that the entropy of the vocabulary distribution is lower for the largest model, which might also explain its generally higher text quality. Larger models might have the capacity to form high-quality sequences even when the text distribution is altered, by increasing the entropy via temperature or by explicitly increasing the magnitude of the bias during watermarking. We leave this for future work.

# A.10 Tabular Results

Here we present the numerical results of the experiments in the main paper. Numbers in parentheses signify the standard deviation.

- Tables 9 and 10 $\leftrightarrow$ Figure 3 show the comparisons with baseline methods.
- Table 11 $\leftrightarrow$ Figure 10 shows the relationship between $\delta$, text quality, and watermark strength.
- Table 12 $\leftrightarrow$ Figure 5 left compares the different configurations of radix and colorlist proportion.
- Table 13 $\leftrightarrow$ Figure 5 left shows the multi-bit watermark performance at a fixed token length.
- Table 14 $\leftrightarrow$ Figure 5 right shows the multi-bit watermark performance at a fixed load capacity (bits per token).
- Table 15 $\leftrightarrow$ Figure 5a shows the multi-bit watermark performance under copy-paste corruption.
- Table 16 $\leftrightarrow$ Figure 5b shows the multi-bit watermark performance under paraphrasing.

# A.11 Generation Samples

We show generated samples below in Table 17.

<table><tr><td></td><td colspan="4">B=8,T=250</td></tr><tr><td>Copy-Paste (p)</td><td>Clean</td><td>cp=10%</td><td>cp=30%</td><td>cp=50%</td></tr><tr><td>Ours</td><td>.986 (.06)</td><td>.981 (.07)</td><td>.956 (.10)</td><td>.900 (.13)</td></tr><tr><td>FCT+EMS</td><td>.979 (.10)</td><td>.943 (.17)</td><td>.858 (.24)</td><td>.800 (.28)</td></tr><tr><td>FCT+Greenlist</td><td>.995 (.05)</td><td>.988 (.08)</td><td>.970 (.12)</td><td>.908 (.20)</td></tr><tr><td>CTWL</td><td>.977 (.11)</td><td>.973 (.12)</td><td>.951 (.16)</td><td>.858 (.24)</td></tr></table>

Table 9: Comparison of multi-bit watermark performance with other methods in clean and corrupted settings. For corruption, we use the copy-paste attack. *The load capacity of FCT+Greenlist is limited to 15 bits.

<table><tr><td></td><td colspan="4">B=16,T=250</td><td colspan="4">B=24,T=250</td></tr><tr><td>Copy-Paste (p)</td><td>Clean</td><td>cp=10%</td><td>cp=30%</td><td>cp=50%</td><td>Clean</td><td>cp=10%</td><td>cp=30%</td><td>cp=50%</td></tr><tr><td>Ours</td><td>.951 (.07)</td><td>.939 (.08)</td><td>.887 (.09)</td><td>.819 (.12)</td><td>.899 (.09)</td><td>.882 (.09)</td><td>.830 (.10)</td><td>.755 (.11)</td></tr><tr><td>FCT+EMS</td><td>.905 (.20)</td><td>.811 (.26)</td><td>.702 (.26)</td><td>.601 (.23)</td><td>.775 (.26)</td><td>.729 (.24)</td><td>.633 (.23)</td><td>.513 (.13)</td></tr><tr><td>CTWL</td><td>.936 (.18)</td><td>.909 (.20)</td><td>.810 (.26)</td><td>.614 (.22)</td><td>.876 (.22)</td><td>.828 (.25)</td><td>.663 (.26)</td><td>.516 (.16)</td></tr></table>

Table 10: Comparison of multi-bit watermark performance with other methods in clean and corrupted settings.
<table><tr><td>δ</td><td>0.5</td><td>1</td><td>1.5</td><td>2</td><td>3</td><td>4</td><td>5</td></tr><tr><td>Bit Acc.</td><td>.626 (.19)</td><td>.766 (.18)</td><td>.887 (.15)</td><td>.947 (.11)</td><td>.982 (.08)</td><td>.993 (.05)</td><td>.995 (.05)</td></tr><tr><td>P-SP (w/ reference)</td><td>.385 (.15)</td><td>.379 (.15)</td><td>.372 (.15)</td><td>.371 (.15)</td><td>.360 (.14)</td><td>.336 (.13)</td><td>.319 (.13)</td></tr><tr><td>P-SP (w/ non-wm.)</td><td>.526 (.18)</td><td>.460 (.16)</td><td>.433 (.15)</td><td>.417 (.15)</td><td>.388 (.14)</td><td>.349 (.14)</td><td>.330 (.13)</td></tr><tr><td>PPL</td><td>4.41 (1.5)</td><td>4.64 (1.8)</td><td>5.01 (2.0)</td><td>5.6 (2.0)</td><td>7.41 (2.7)</td><td>10.3 (4.1)</td><td>13.67 (5.9)</td></tr></table>

Table 11: Bit accuracy and text quality when embedding an 8-bit message at $T = 250$ across various magnitudes of bias $\delta$.

<table><tr><td colspan="5">Bit Accuracy @ T=250</td></tr><tr><td>Bit</td><td>8</td><td>16</td><td>24</td><td>32</td></tr><tr><td>γ=.25,r=4</td><td>.986 (.06)</td><td>.951 (.07)</td><td>.900 (.09)</td><td>.871 (.08)</td></tr><tr><td>γ=.25,r=2</td><td>.966 (.07)</td><td>.905 (.08)</td><td>.858 (.08)</td><td>.820 (.08)</td></tr><tr><td>γ=.50,r=2</td><td>.978 (.05)</td><td>.922 (.07)</td><td>.875 (.08)</td><td>.849 (.07)</td></tr></table>

Table 12: Multi-bit watermark performance measured by bit accuracy for varying configurations of colorlist proportion and radix.

<table><tr><td colspan="5">Bit Acc. @ T=250</td></tr><tr><td>Bit</td><td>8</td><td>16</td><td>24</td><td>32</td></tr><tr><td>LeftHash(h = 1)</td><td>.986 (.06)</td><td>.951 (.07)</td><td>.900 (.09)</td><td>.871 (.08)</td></tr><tr><td>SelfHash(h = 4)</td><td>.976 (.08)</td><td>.905 (.08)</td><td>.895 (.09)</td><td>.862 (.09)</td></tr></table>

Table 13: Bit accuracy for the two hash schemes at a fixed token length.
<table><tr><td colspan="6">Bit Acc. @ BPT=.064</td></tr><tr><td>T</td><td>63</td><td>125</td><td>250</td><td>500</td><td>1000</td></tr><tr><td>Bit</td><td>4</td><td>8</td><td>16</td><td>32</td><td>64</td></tr><tr><td>LeftHash(h = 1)</td><td>.961 (.13)</td><td>.958 (.09)</td><td>.951 (.07)</td><td>.913 (.08)</td><td>.846 (.09)</td></tr><tr><td>SelfHash(h = 4)</td><td>.952 (.13)</td><td>.953 (.10)</td><td>.945 (.08)</td><td>.911 (.08)</td><td>.850 (.08)</td></tr></table>

Table 14: Bit accuracy for the two hash schemes at a fixed number of bits per token.

<table><tr><td rowspan="2" colspan="2">Attack Strength</td><td colspan="6">Copy-paste Attack</td></tr><tr><td>Clean</td><td>10%</td><td>20%</td><td>30%</td><td>40%</td><td>50%</td></tr><tr><td rowspan="2">8-bit</td><td>Best</td><td>.986 (.06)</td><td>.981 (.07)</td><td>.971 (.08)</td><td>.956 (.10)</td><td>.938 (.12)</td><td>.900 (.13)</td></tr><tr><td>+16-List</td><td>.997 (.02)</td><td>.997 (.02)</td><td>.995 (.03)</td><td>.993 (.03)</td><td>.991 (.04)</td><td>.980 (.05)</td></tr><tr><td rowspan="2">16-bit</td><td>Best</td><td>.951 (.07)</td><td>.939 (.08)</td><td>.918 (.09)</td><td>.887 (.09)</td><td>.858 (.11)</td><td>.819 (.12)</td></tr><tr><td>+16-List</td><td>.988 (.04)</td><td>.983 (.04)</td><td>.978 (.05)</td><td>.964 (.06)</td><td>.947 (.07)</td><td>.918 (.08)</td></tr><tr><td rowspan="2">24-bit</td><td>Best</td><td>.899 (.09)</td><td>.882 (.09)</td><td>.858 (.10)</td><td>.830 (.10)</td><td>.797 (.11)</td><td>.755 (.11)</td></tr><tr><td>+16-List</td><td>.959 (.06)</td><td>.944 (.06)</td><td>.927 (.08)</td><td>.907 (.08)</td><td>.879 (.09)</td><td>.840 (.09)</td></tr><tr><td rowspan="2">32-bit</td><td>Best</td><td>.871 (.08)</td><td>.851 (.09)</td><td>.828 (.09)</td><td>.801 (.09)</td><td>.765 (.09)</td><td>.723 (.10)</td></tr><tr><td>+16-List</td><td>.927 (.07)</td><td>.910 (.08)</td><td>.888 (.08)</td><td>.863 (.08)</td><td>.831 (.09)</td><td>.792 (.09)</td></tr></table>

Table 15: Robustness when a certain percentage of human text is mixed into the watermarked text.
<table><tr><td colspan="5">GPT-3.5 Paraphrasing</td></tr><tr><td colspan="2">Token Length</td><td>250T</td><td>400T</td><td>500T</td></tr><tr><td rowspan="5">8-bit</td><td>Best</td><td>.733 (.19)</td><td>.792 (.19)</td><td>.795 (.19)</td></tr><tr><td>+2-List</td><td>.825 (.16)</td><td>.874 (.15)</td><td>.875 (.15)</td></tr><tr><td>+4-List</td><td>.856 (.14)</td><td>.894 (.13)</td><td>.898 (.13)</td></tr><tr><td>+8-List</td><td>.893 (.12)</td><td>.924 (.11)</td><td>.928 (.11)</td></tr><tr><td>+16-List</td><td>.911 (.10)</td><td>.934 (.09)</td><td>.939 (.09)</td></tr></table>
# Prompt
"As an expert copy-editor, please rewrite the following text in your own voice while ensuring that the final output contains the same information as the original text and has roughly the same length. Please paraphrase all sentences and do not omit any crucial details. Additionally, please take care to provide any relevant information about public figures, organizations, or other entities mentioned in the text to avoid any potential misunderstandings or biases."

Table 16: Robustness when paraphrased with GPT-3.5 using the prompt above.

<table><tr><td>Prompt</td><td>Non-Watermarked</td><td>Watermarked</td></tr><tr><td>A blind man in his 60s is searching for the young man who pulled him back from an approaching train and saved his life last Thursday, Nov. 9. Mike Wyatt stood at the stairs to the Peoria train station in Aurora, Colorado, on Monday, looking for the young man who saved his life days earlier</td><td>"He told me he was 20 years old and he was going to college," Wyatt told reporters. "He said he was going to be a doctor." Wyatt, who is blind and has been for most of his life, was at the train station ready to catch a train to the airport when he slipped and almost fell in front of an oncoming train.[...]</td><td>and has given him new life. Inside the station, he spoke to the counter clerk, hoping the clerk might have information that might lead him to the young man, who jumped at the chance to help Wyatt when he saw he was in danger. But, he said, he found nothing.[...]</td></tr><tr><td>[...] Shortly after the release of the lead single "Life's About to Get Good," it hit number one on the Billboard 200. Of course, fans were simply dying to see how their favourite country icon was doing, so</td><td>the album was released on CD, vinyl, and digital. But, Shania took things to the next level and released a deluxe version with added tracks. [...]</td><td>the song was released a few months earlier in June. "OK so I am sooooo proud that my song Life's About to Get Good is now the number one song on Billboard country," she tweeted back in June.[...]</td></tr><tr><td>[...] Amazon's offices will be located within an already-established special tax district where a portion of the property tax revenue goes toward infrastructure improvements such as parks and wider sidewalks. The incentive agreement says that half of any new revenue from that district starting in 202</td><td>3 would go to Amazon to help pay for employee benefits and transportation, but the county has said it would not be required to pay those benefits.[...]</td><td>7 would go to Amazon through a 15-year payment schedule. This amount would grow to more than 60 percent if the company exceeds 10.24 million square feet of office space. Amazon is expected to reach that level in 2027.[...]</td></tr><tr><td>[...] "Below the surface of the Sun a dynamo process is working creating magnetic field," Harra explained in an email interview. "When this becomes buoyant it can rise to the surface of the Sun,</td><td>and this is what we call a sunspot." The new research shows that the initial explosion was actually a series of smaller blasts that combined to create the X-7 flare.</td><td>and form coronal holes and sunspots. This happens in a fairly homogeneous area on the Sun, so there can be several sunspots with a single magnetic field underneath.</td></tr><tr><td>[...]The merge listing the most important changes to Linux 3.8's sound sub-system includes some other changes to audio drivers. The kernel now includes a driver for human interface devices (HIDs) that use I2C (1, 2 and others), using the "HID over I2C" protocol designed by Microsoft and implemented in Windows</td><td>7 and later versions of the operating system. The kernel now has a driver for the Samsung Galaxy S III smartphone's touchscreen (1, 2 and others), and the rt2800usb driver, for the RaLink RT2800USB WLAN chip, now supports devices that have Blue-tooth 3.0 (1, 2).[...]</td><td>7 and Windows Vista. The drivers can read out data from HIDs and set the appropriate commands to them. An example of such a device is a BT-USB adapter. The sound subsystem now supports two new, high-quality audio CODES (1, 2):[...]</td></tr></table>

Table 17: Randomly sampled examples of watermarked texts on the C4 news-like subset with $100\%$ bit accuracy. Samples are truncated for readability.

2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/images.zip
ADDED

@@ -0,0 +1,3 @@

version https://git-lfs.github.com/spec/v1
oid sha256:74ce7c671a00b20408967503900b6ab4c4b99ffd11bee3945f9351f8f90bf459
size 1301979

2024/Advancing Beyond Identification_ Multi-bit Watermark for Large Language Models/layout.json
ADDED

The diff for this file is too large to render. See raw diff

2024/AfriMTE and AfriCOMET_ Enhancing COMET to Embrace Under-resourced African Languages/ea381c68-7b23-4741-a516-c5b3e5daf8a5_content_list.json
ADDED

The diff for this file is too large to render. See raw diff