Dataset schema (column name: dtype, value stats):

parent_paper_title: stringclasses, 63 values
parent_paper_arxiv_id: stringclasses, 63 values
citation_shorthand: stringlengths, 2–56
raw_citation_text: stringlengths, 9–63
cited_paper_title: stringlengths, 5–161
cited_paper_arxiv_link: stringlengths, 32–37
cited_paper_abstract: stringlengths, 406–1.92k
has_metadata: bool, 1 class
is_arxiv_paper: bool, 2 classes
bib_paper_authors: stringlengths, 2–2.44k
bib_paper_year: float64, 1.97k–2.03k
bib_paper_month: stringclasses, 16 values
bib_paper_url: stringlengths, 20–116
bib_paper_doi: stringclasses, 269 values
bib_paper_journal: stringlengths, 3–148
original_title: stringlengths, 5–161
search_res_title: stringlengths, 4–122
search_res_url: stringlengths, 22–267
search_res_content: stringlengths, 19–1.92k
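Below is a minimal sketch of how one might load and query a dataset with this schema using the Hugging Face datasets library. The dataset id used here is a placeholder (the real Hub id is not given in this dump), so treat this as an illustration of the column layout rather than a verified loading recipe.

from datasets import load_dataset

# Placeholder id; substitute the actual Hub id for this dataset.
ds = load_dataset("some-org/cited-paper-metadata", split="train")

# Column names follow the schema listed above.
print(ds.column_names)

# Example query: keep rows whose cited paper is on arXiv
# (is_arxiv_paper is a 2-class bool; has_metadata is always true per the schema).
arxiv_rows = ds.filter(lambda r: r["is_arxiv_paper"])

row = arxiv_rows[0]
# bib_paper_year is stored as float64 (hence the "2,022"-style rendering), so cast for display.
print(row["citation_shorthand"], row["cited_paper_title"], int(row["bib_paper_year"]))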
Dual Debiasing for Noisy In-Context Learning for Text Generation
2506.00418v1
min2022rethinking
\cite{min2022rethinking}
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
http://arxiv.org/abs/2202.12837v2
Large language models (LMs) are able to in-context learn -- perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs. However, there has been little understanding of how the model learns and which aspects of the demonstrations contribute to...
true
true
Min, Sewon and Lyu, Xinxi and Holtzman, Ari and Artetxe, Mikel and Lewis, Mike and Hajishirzi, Hannaneh and Zettlemoyer, Luke
2022
null
null
null
arXiv preprint arXiv:2202.12837
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
[PDF] What Makes In-Context Learning Work? - ACL Anthology
https://aclanthology.org/2022.emnlp-main.759.pdf
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? Large language models (LMs) are able to in-context learn—perform a new task via
Dual Debiasing for Noisy In-Context Learning for Text Generation
2506.00418v1
kang2024context
\cite{kang2024context}
In-Context Learning with Noisy Labels
http://arxiv.org/abs/2411.19581v1
In-context learning refers to the emerging ability of large language models (LLMs) to perform a target task without additional training, utilizing demonstrations of the task. Recent studies aim to enhance in-context learning performance by selecting more useful demonstrations. However, they overlook the presence of ine...
true
true
Kang, Junyong and Son, Donghyun and Song, Hwanjun and Chang, Buru
2024
null
null
null
arXiv preprint arXiv:2411.19581
In-Context Learning with Noisy Labels
[2411.19581] In-Context Learning with Noisy Labels - arXiv
https://arxiv.org/abs/2411.19581
In this paper, we propose a new task, in-context learning with noisy labels, which aims to solve real-world problems for in-context learning.
Dual Debiasing for Noisy In-Context Learning for Text Generation
2506.00418v1
gao2024noise
\cite{gao2024noise}
On the Noise Robustness of In-Context Learning for Text Generation
http://arxiv.org/abs/2405.17264v3
Large language models (LLMs) have shown impressive performance on downstream tasks by in-context learning (ICL), which heavily relies on the quality of demonstrations selected from a large set of annotated examples. Recent works claim that in-context learning is robust to noisy demonstrations in text classification. In...
true
true
Gao, Hongfu and Zhang, Feipeng and Jiang, Wenyu and Shu, Jun and Zheng, Feng and Wei, Hongxin
2024
null
null
null
null
On the Noise Robustness of In-Context Learning for Text Generation
On the Noise Robustness of In-Context Learning for Text ...
https://openreview.net/forum?id=00uVk06eVK
The paper "On the Noise Robustness of In-Context Learning for Text Generation" investigates how LLMs handle noisy annotations during in-context
Dual Debiasing for Noisy In-Context Learning for Text Generation
2506.00418v1
li2022contrastive
\cite{li2022contrastive}
Contrastive Decoding: Open-ended Text Generation as Optimization
http://arxiv.org/abs/2210.15097v2
Given a language model (LM), maximum probability is a poor decoding objective for open-ended generation, because it produces short and repetitive text. On the other hand, sampling can often produce incoherent text that drifts from the original topics. We propose contrastive decoding (CD), a reliable decoding approach t...
true
true
Li, Xiang Lisa and Holtzman, Ari and Fried, Daniel and Liang, Percy and Eisner, Jason and Hashimoto, Tatsunori and Zettlemoyer, Luke and Lewis, Mike
2022
null
null
null
arXiv preprint arXiv:2210.15097
Contrastive Decoding: Open-ended Text Generation as Optimization
Contrastive Decoding: Open-ended Text Generation as Optimization
https://arxiv.org/abs/2210.15097
We propose contrastive decoding (CD), a reliable decoding approach that optimizes a contrastive objective subject to a plausibility constraint.
Dual Debiasing for Noisy In-Context Learning for Text Generation
2506.00418v1
zhao2024enhancing
\cite{zhao2024enhancing}
Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding
http://arxiv.org/abs/2405.02750v1
Large language models (LLMs) tend to inadequately integrate input context during text generation, relying excessively on encoded prior knowledge in model parameters, potentially resulting in generated text with factual inconsistencies or contextually unfaithful content. LLMs utilize two primary knowledge sources: 1) pr...
true
true
Zhao, Zheng and Monti, Emilio and Lehmann, Jens and Assem, Haytham
2024
null
null
null
arXiv preprint arXiv:2405.02750
Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding
Enhancing Contextual Understanding in Large Language Models ...
https://aclanthology.org/2024.naacl-long.237/
We introduce a novel approach integrating contrastive decoding with adversarial irrelevant passages as negative samples to enhance robust context grounding
Dual Debiasing for Noisy In-Context Learning for Text Generation
2506.00418v1
fei2023mitigating
\cite{fei2023mitigating}
Mitigating Label Biases for In-context Learning
http://arxiv.org/abs/2305.19148v3
Various design settings for in-context learning (ICL), such as the choice and order of the in-context examples, can bias a model toward a particular prediction without being reflective of an understanding of the task. While many studies discuss these design choices, there have been few systematic investigations into ca...
true
true
Fei, Yu and Hou, Yifan and Chen, Zeming and Bosselut, Antoine
2023
null
null
null
arXiv preprint arXiv:2305.19148
Mitigating Label Biases for In-context Learning
[2305.19148] Mitigating Label Biases for In-context Learning - arXiv
https://arxiv.org/abs/2305.19148
In this work, we define a typology for three types of label biases in ICL for text classification: vanilla-label bias, context-label bias, and domain-label
Dual Debiasing for Noisy In-Context Learning for Text Generation
2506.00418v1
zhao2021calibrate
\cite{zhao2021calibrate}
Calibrate Before Use: Improving Few-Shot Performance of Language Models
http://arxiv.org/abs/2102.09690v2
GPT-3 can perform numerous tasks when provided a natural language prompt that contains a few training examples. We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the training examples can cause accuracy to vary from near chance to near sta...
true
true
Zhao, Zihao and Wallace, Eric and Feng, Shi and Klein, Dan and Singh, Sameer
2021
null
null
null
null
Calibrate Before Use: Improving Few-Shot Performance of Language Models
Calibrate Before Use: Improving Few-Shot Performance of ...
http://proceedings.mlr.press/v139/zhao21c/zhao21c.pdf
by Z Zhao · 2021 · Cited by 1608 — Overall, contextual calibration is a simple method that makes language models better few-shot learners: it enables end users to obtain higher accuracy with.
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
NIPS2013_9aa42b31
\cite{NIPS2013_9aa42b31}
Distributed Representations of Words and Phrases and their Compositionality
http://arxiv.org/abs/1310.4546v1
The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the traini...
true
true
Tomás Mikolov and Ilya Sutskever and Kai Chen and Gregory S. Corrado and Jeffrey Dean
2013
null
https://proceedings.neurips.cc/paper/2013/hash/9aa42b31882ec039965f3c4923ce901b-Abstract.html
null
null
Distributed Representations of Words and Phrases and their Compositionality
[PDF] Distributed Representations of Words and Phrases and their ...
https://proceedings.neurips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf
Distributed representations of words use vector spaces to group similar words, capturing syntactic and semantic relationships, and are limited by their
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
pennington-etal-2014-glove
\cite{pennington-etal-2014-glove}
Glove: Global Vectors for Word Representation
null
null
true
false
Jeffrey Pennington and Richard Socher and Christopher D. Manning
2014
null
https://doi.org/10.3115/v1/d14-1162
10.3115/V1/D14-1162
null
Glove: Global Vectors for Word Representation
GloVe: Global Vectors for Word Representation
https://nlp.stanford.edu/projects/glove/
GloVe: Global Vectors for Word Representation. Jeffrey Pennington, Richard Socher, Christopher D. Manning. GloVe is designed in order that such vector differences capture as much as possible the meaning specified by the juxtaposition of t...
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
transformer
\cite{transformer}
Attention Is All You Need
http://arxiv.org/abs/1706.03762v7
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on at...
true
true
Ashish Vaswani and Noam Shazeer and Niki Parmar and Jakob Uszkoreit and Llion Jones and Aidan N. Gomez and Lukasz Kaiser and Illia Polosukhin
2017
null
https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
null
null
Attention Is All You Need
Attention Is All You Need
http://arxiv.org/pdf/1706.03762v7
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on at...
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
devlin-etal-2019-bert
\cite{devlin-etal-2019-bert}
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
http://arxiv.org/abs/1810.04805v2
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right contex...
true
true
Jacob Devlin and Ming-Wei Chang and Kenton Lee and Kristina Toutanova
2019
null
https://doi.org/10.18653/v1/n19-1423
10.18653/V1/N19-1423
null
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
[PDF] BERT: Pre-training of Deep Bidirectional Transformers for Language ...
https://aclanthology.org/N19-1423.pdf
Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one ...
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
cer-etal-2018-universal
\cite{cer-etal-2018-universal}
Universal Sentence Encoder for English
null
null
true
false
Daniel Cer and Yinfei Yang and Sheng-yi Kong and Nan Hua and Nicole Limtiaco and Rhomni St. John and Noah Constant and Mario Guajardo-Cespedes and Steve Yuan and ...
2018
null
https://doi.org/10.18653/v1/d18-2029
10.18653/V1/D18-2029
null
Universal Sentence Encoder for English
[1803.11175] Universal Sentence Encoder - arXiv
https://arxiv.org/abs/1803.11175
We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks.
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
reimers-gurevych-2019-sentence
\cite{reimers-gurevych-2019-sentence}
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
http://arxiv.org/abs/1908.10084v1
BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair i...
true
true
Nils Reimers and Iryna Gurevych
2019
null
https://doi.org/10.18653/v1/D19-1410
10.18653/V1/D19-1410
null
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
[PDF] Sentence Embeddings using Siamese BERT-Networks
https://aclanthology.org/D19-1410.pdf
©2019 Association for Computational Linguistics. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. Nils Reimers and Iryna Gurevych, Ubiquitous Knowledge Processing Lab (UKP-TUDA), Department of Computer Science, Technische Universität Darmstadt, www.ukp.tu-darmstadt.de. Abstract: BERT (Devlin et al., 20...
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
gao-etal-2021-simcse
\cite{gao-etal-2021-simcse}
SimCSE: Simple Contrastive Learning of Sentence Embeddings
http://arxiv.org/abs/2104.08821v4
This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works sur...
true
true
Tianyu Gao and Xingcheng Yao and Danqi Chen
2021
null
https://doi.org/10.18653/v1/2021.emnlp-main.552
null
null
SimCSE: Simple Contrastive Learning of Sentence Embeddings
SimCSE: Simple Contrastive Learning of Sentence Embeddings
http://arxiv.org/pdf/2104.08821v4
This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works sur...
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
zhuo-etal-2023-whitenedcse
\cite{zhuo-etal-2023-whitenedcse}
WhitenedCSE: Whitening-based Contrastive Learning of Sentence Embeddings
null
null
true
false
Wenjie Zhuo and Yifan Sun and Xiaohan Wang and Linchao Zhu and Yi Yang
2023
null
https://doi.org/10.18653/v1/2023.acl-long.677
10.18653/V1/2023.ACL-LONG.677
null
WhitenedCSE: Whitening-based Contrastive Learning of Sentence Embeddings
Whitening-based Contrastive Learning of Sentence Embeddings
https://aclanthology.org/2023.acl-long.677/
This paper presents a whitening-based contrastive learning method for sentence embedding learning (WhitenedCSE), which combines contrastive learning with a
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
wang2023improving
\cite{wang2023improving}
Improving Text Embeddings with Large Language Models
http://arxiv.org/abs/2401.00368v3
In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pre-training with billions of weakly-supervised text pairs, followed by fine-tuning with a few...
true
true
Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu
2023
null
https://doi.org/10.48550/arXiv.2401.00368
null
arXiv
Improving Text Embeddings with Large Language Models
Improving Text Embeddings with Large Language Models
http://arxiv.org/pdf/2401.00368v3
In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pre-training with billions of weakly-supervised text pairs, followed by fine-tuning with a few...
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
muennighoff2024generative
\cite{muennighoff2024generative}
Generative Representational Instruction Tuning
http://arxiv.org/abs/2402.09906v3
All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between th...
true
true
Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela
2025
null
https://openreview.net/forum?id=BC4lIvfSzv
null
null
Generative Representational Instruction Tuning
Generative Representational Instruction Tuning
http://arxiv.org/pdf/2402.09906v3
All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between th...
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
lei-etal-2024-meta
\cite{lei-etal-2024-meta}
Meta-Task Prompting Elicits Embeddings from Large Language Models
http://arxiv.org/abs/2402.18458v2
We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation (MetaEOL), for generating high-quality sentence embeddings from Large Language Models (LLMs) without the need for model fine-tuning. Leveraging meta-task prompting, MetaEOL guides LLMs to produce embeddings thro...
true
true
Yibin Lei and Di Wu and Tianyi Zhou and Tao Shen and Yu Cao and Chongyang Tao and Andrew Yates
2024
null
https://doi.org/10.18653/v1/2024.acl-long.546
10.18653/V1/2024.ACL-LONG.546
null
Meta-Task Prompting Elicits Embeddings from Large Language Models
[PDF] Meta-Task Prompting Elicits Embeddings from Large Language ...
https://aclanthology.org/2024.acl-long.546.pdf
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10141–10157, August 11-16, 2024. ©2024 Association for Computational Linguistics. Meta-Task Prompting Elicits Embeddings from Large Language Models. Yibin Lei, Di Wu, Tianyi Zhou, Tao Shen, Yu Cao, C...
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
li-li-2024-aoe
\cite{li-li-2024-aoe}
AoE: Angle-optimized Embeddings for Semantic Textual Similarity
null
null
true
false
Xianming Li and Jing Li
2024
null
https://doi.org/10.18653/v1/2024.acl-long.101
10.18653/V1/2024.ACL-LONG.101
null
AoE: Angle-optimized Embeddings for Semantic Textual Similarity
AoE: Angle-optimized Embeddings for Semantic Textual Similarity
https://aclanthology.org/2024.acl-long.101/
We propose a novel Angle-optimized Embedding model, AoE. It optimizes angle differences in complex space to explore similarity in saturation zones better.
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
su-etal-2023-one
\cite{su-etal-2023-one}
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
http://arxiv.org/abs/2212.09741v3
We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions: every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). Unlike encoders from prior work that are more specialized, INSTRUCTOR is a single embedder that can generate ...
true
true
Su, Hongjin and Shi, Weijia and Kasai, Jungo and Wang, Yizhong and Hu, Yushi and Ostendorf, Mari and Yih, Wen-tau and Smith, Noah A. and Zettlemoyer, Luke and Yu, Tao
2023
null
https://aclanthology.org/2023.findings-acl.71/
null
null
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
https://aclanthology.org/2023.findings-acl.71/
Anthology ID: 2023.findings-acl.71. Volume: Findings of the Association for Computational Linguistics: ACL 2023. Month: July. Year: 2023. Address: Toronto, Canada. Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki. Venue: Findings. Publisher: Association for Computational Linguistics. Pages: 1102–1121. URL: https://...
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
peng-etal-2024-answer
\cite{peng-etal-2024-answer}
Answer is All You Need: Instruction-following Text Embedding via Answering the Question
http://arxiv.org/abs/2402.09642v1
This work aims to build a text embedder that can capture characteristics of texts specified by user instructions. Despite its tremendous potential to deploy user-oriented embeddings, none of previous approaches provides a concrete solution for it. This paper offers a new viewpoint, which treats the instruction as a que...
true
true
Letian Peng and Yuwei Zhang and Zilong Wang and Jayanth Srinivasa and Gaowen Liu and Zihan Wang and Jingbo Shang
2024
null
https://doi.org/10.18653/v1/2024.acl-long.27
10.18653/V1/2024.ACL-LONG.27
null
Answer is All You Need: Instruction-following Text Embedding via Answering the Question
Answer is All You Need: Instruction-following Text ...
https://aclanthology.org/2024.acl-long.27/
by L Peng · 2024 · Cited by 11 — This work aims to build a text embedder that can capture characteristics of texts specified by user instructions clarifying the similarity criterion.
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
weller2024promptriever
\cite{weller2024promptriever}
Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models
http://arxiv.org/abs/2409.11136v1
Instruction-tuned language models (LM) are able to respond to imperative commands, providing a more natural user interface compared to their base counterparts. In this work, we present Promptriever, the first retrieval model able to be prompted like an LM. To train Promptriever, we curate and release a new instance-lev...
true
true
Orion Weller and Benjamin Van Durme and Dawn J. Lawrie and Ashwin Paranjape and Yuhao Zhang and Jack Hessel
2025
null
https://openreview.net/forum?id=odvSjn416y
null
null
Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models
Promptriever: Instruction-Trained Retrievers Can Be ...
https://openreview.net/forum?id=odvSjn416y
by O Weller · Cited by 29 — This paper introduces Promptriever, a retrieval model that can be prompted like a language model. The authors construct an instance-level instruction training
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
min2024unihgkr
\cite{min2024unihgkr}
UniHGKR: Unified Instruction-aware Heterogeneous Knowledge Retrievers
http://arxiv.org/abs/2410.20163v2
Existing information retrieval (IR) models often assume a homogeneous structure for knowledge sources and user queries, limiting their applicability in real-world settings where retrieval is inherently heterogeneous and diverse. In this paper, we introduce UniHGKR, a unified instruction-aware heterogeneous knowledge re...
true
true
Dehai Min and Zhiyang Xu and Guilin Qi and Lifu Huang and Chenyu You
2025
null
https://aclanthology.org/2025.naacl-long.234/
null
null
UniHGKR: Unified Instruction-aware Heterogeneous Knowledge Retrievers
UniHGKR: Unified Instruction-aware Heterogeneous ...
https://arxiv.org/abs/2410.20163
by D Min · 2024 · Cited by 2 — In this paper, we introduce UniHGKR, a unified instruction-aware heterogeneous knowledge retriever that (1) builds a unified retrieval space for heterogeneous
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
oh2024instructir
\cite{oh2024instructir}
INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models
http://arxiv.org/abs/2402.14334v1
Despite the critical need to align search targets with users' intention, retrievers often only prioritize query information without delving into the users' intended search context. Enhancing the capability of retrievers to understand intentions and preferences of users, akin to language model instructions, has the pote...
true
true
Hanseok Oh and Hyunji Lee and Seonghyeon Ye and Haebin Shin and Hansol Jang and Changwook Jun and Minjoon Seo
2024
null
https://doi.org/10.48550/arXiv.2402.14334
10.48550/ARXIV.2402.14334
arXiv
INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models
InstructIR: A Benchmark for Instruction Following of ...
https://arxiv.org/html/2402.14334v1
Our approach focuses on user-aligned instructions tailored to each query instance, reflecting the diverse characteristics inherent in real-world search scenarios. Moreover, lack of benchmarks to evaluate retrievers on user-aligned scenarios prevents the mature discussions of instruction following in retrieval task. In ...
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
sun2024mair
\cite{sun2024mair}
MAIR: A Massive Benchmark for Evaluating Instructed Retrieval
http://arxiv.org/abs/2410.10127v1
Recent information retrieval (IR) models are pre-trained and instruction-tuned on massive datasets and tasks, enabling them to perform well on a wide range of tasks and potentially generalize to unseen tasks with instructions. However, existing IR benchmarks focus on a limited scope of tasks, making them insufficient f...
true
true
Weiwei Sun and Zhengliang Shi and Wu Long and Lingyong Yan and Xinyu Ma and Yiding Liu and Min Cao and Dawei Yin and Zhaochun Ren
2024
null
https://aclanthology.org/2024.emnlp-main.778
null
null
MAIR: A Massive Benchmark for Evaluating Instructed Retrieval
MAIR: A Massive Benchmark for Evaluating Instructed Retrieval
http://arxiv.org/pdf/2410.10127v1
Recent information retrieval (IR) models are pre-trained and instruction-tuned on massive datasets and tasks, enabling them to perform well on a wide range of tasks and potentially generalize to unseen tasks with instructions. However, existing IR benchmarks focus on a limited scope of tasks, making them insufficient f...
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
2505.24754v1
weller2024followir
\cite{weller2024followir}
FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions
http://arxiv.org/abs/2403.15246v3
Modern Language Models (LMs) are capable of following long and complex instructions that enable a large and diverse set of user requests. While Information Retrieval (IR) models use these LMs as the backbone of their architectures, virtually none of them allow users to provide detailed instructions alongside queries, t...
true
true
Orion Weller and Benjamin Chang and Sean MacAvaney and Kyle Lo and Arman Cohan and Benjamin Van Durme and Dawn J. Lawrie and Luca Soldaini
2025
null
https://aclanthology.org/2025.naacl-long.597/
null
null
FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions
FollowIR: Evaluating and Teaching Information Retrieval ...
https://arxiv.org/abs/2403.15246
by O Weller · 2024 · Cited by 43 — Through this process, we can measure how well IR models follow instructions, through a new pairwise evaluation framework. Our results indicate
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
ladhak-etal-2020-exploring
\cite{ladhak-etal-2020-exploring}
Exploring Content Selection in Summarization of Novel Chapters
http://arxiv.org/abs/2005.01840v3
We present a new summarization task, generating summaries of novel chapters using summary/chapter pairs from online study guides. This is a harder task than the news summarization task, given the chapter length as well as the extreme paraphrasing and generalization found in the summaries. We focus on extractive summari...
true
true
Ladhak, Faisal and Li, Bryan and Al-Onaizan, Yaser and McKeown, Kathleen
2020
null
https://aclanthology.org/2020.acl-main.453/
10.18653/v1/2020.acl-main.453
null
Exploring Content Selection in Summarization of Novel Chapters
Exploring Content Selection in Summarization of Novel Chapters
http://arxiv.org/pdf/2005.01840v3
We present a new summarization task, generating summaries of novel chapters using summary/chapter pairs from online study guides. This is a harder task than the news summarization task, given the chapter length as well as the extreme paraphrasing and generalization found in the summaries. We focus on extractive summari...
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
pu-etal-2022-two
\cite{pu-etal-2022-two}
Two-Stage Movie Script Summarization: An Efficient Method For Low-Resource Long Document Summarization
null
null
true
false
Liu, Dongqi and Hong, Xudong and Lin, Pin-Jie and Chang, Ernie and Demberg, Vera
2022
null
https://aclanthology.org/2022.creativesumm-1.9/
null
null
Two-Stage Movie Script Summarization: An Efficient Method For Low-Resource Long Document Summarization
Two-Stage Movie Script Summarization: An Efficient Method For ...
https://scispace.com/papers/two-stage-movie-script-summarization-an-efficient-method-for-2ca5vhpp
The core innovation in our model employs a two-stage hierarchical architecture for movie script summarization. In the first stage, a heuristic extraction method
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
gorinski-lapata-2015-movie
\cite{gorinski-lapata-2015-movie}
Movie Script Summarization as Graph-based Scene Extraction
null
null
true
false
Gorinski, Philip John and Lapata, Mirella
2015
null
https://aclanthology.org/N15-1113/
10.3115/v1/N15-1113
null
Movie Script Summarization as Graph-based Scene Extraction
Movie Script Summarization As Graph-Based Scene Extraction | PDF
https://www.scribd.com/document/456741694/N15-1113
The document discusses summarizing movie scripts by extracting a chain of important scenes. It formalizes script summarization as finding an optimal scene chain
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
saxena-keller-2024-select
\cite{saxena-keller-2024-select}
Select and Summarize: Scene Saliency for Movie Script Summarization
http://arxiv.org/abs/2404.03561v1
Abstractive summarization for long-form narrative texts such as movie scripts is challenging due to the computational and memory constraints of current language models. A movie script typically comprises a large number of scenes; however, only a fraction of these scenes are salient, i.e., important for understanding th...
true
true
Saxena, Rohit and Keller, Frank
2024
null
https://aclanthology.org/2024.findings-naacl.218/
10.18653/v1/2024.findings-naacl.218
null
Select and Summarize: Scene Saliency for Movie Script Summarization
Select and Summarize: Scene Saliency for Movie Script Summarization
http://arxiv.org/pdf/2404.03561v1
Abstractive summarization for long-form narrative texts such as movie scripts is challenging due to the computational and memory constraints of current language models. A movie script typically comprises a large number of scenes; however, only a fraction of these scenes are salient, i.e., important for understanding th...
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
zaheer2020bigbird
\cite{zaheer2020bigbird}
Big Bird: Transformers for Longer Sequences
http://arxiv.org/abs/2007.14062v2
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse att...
true
true
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon, Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and Ahmed, Amr
2020
null
https://proceedings.neurips.cc/paper_files/paper/2020/file/c8512d142a2d849725f31a9a7a361ab9-Paper.pdf
null
null
Big Bird: Transformers for Longer Sequences
Big Bird: Transformers for Longer Sequences
http://arxiv.org/pdf/2007.14062v2
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse att...
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
Beltagy2020Longformer
\cite{Beltagy2020Longformer}
Longformer: The Long-Document Transformer
http://arxiv.org/abs/2004.05150v2
Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of ...
true
true
Iz Beltagy and Matthew E. Peters and Arman Cohan
2020
null
https://arxiv.org/abs/2004.05150
null
null
Longformer: The Long-Document Transformer
[PDF] Longformer: The Long-Document Transformer
https://ysu1989.github.io/courses/au20/cse5539/Longformer.pdf
Longformer: The Long-Document Transformer. Beltagy et al., 2020. Presented by Leslie Zhou. Background: Transformers have achieved state-of-the-art results in a wide range of natural language tasks, including generative language modeling and discriminative language understanding. Classification (IMDB and Hyperpart...
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
kitaev2020reformerefficienttransformer
\cite{kitaev2020reformerefficienttransformer}
Reformer: The Efficient Transformer
http://arxiv.org/abs/2001.04451v2
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensiti...
true
true
Nikita Kitaev and Łukasz Kaiser and Anselm Levskaya
2020
null
https://arxiv.org/abs/2001.04451
null
null
Reformer: The Efficient Transformer
Reformer: The Efficient Transformer
http://arxiv.org/pdf/2001.04451v2
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensiti...
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
guo-etal-2022-longt5
\cite{guo-etal-2022-longt5}
LongT5: Efficient Text-To-Text Transformer for Long Sequences
null
null
true
false
Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei
2022
null
https://aclanthology.org/2022.findings-naacl.55/
10.18653/v1/2022.findings-naacl.55
null
LongT5: Efficient Text-To-Text Transformer for Long Sequences
LongT5: Efficient Text-To-Text Transformer for Long Sequences
https://aclanthology.org/2022.findings-naacl.55/
In this paper, we present LongT5, a new model that explores the effects of scaling both the input length and model size at the same time.
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
wang2020linformerselfattentionlinearcomplexity
\cite{wang2020linformerselfattentionlinearcomplexity}
Linformer: Self-Attention with Linear Complexity
http://arxiv.org/abs/2006.04768v3
Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses $O(n^2)$ time an...
true
true
Sinong Wang and Belinda Z. Li and Madian Khabsa and Han Fang and Hao Ma
2020
null
https://arxiv.org/abs/2006.04768
null
null
Linformer: Self-Attention with Linear Complexity
[2006.04768] Linformer: Self-Attention with Linear Complexity
https://arxiv.org/abs/2006.04768
by S Wang · 2020 · Cited by 2185 — A new self-attention mechanism, which reduces the overall self-attention complexity from O(n^2) to O(n) in both time and space.
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
chen2023extendingcontextwindowlarge
\cite{chen2023extendingcontextwindowlarge}
Extending Context Window of Large Language Models via Positional Interpolation
http://arxiv.org/abs/2306.15595v2
We present Position Interpolation (PI) that extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal fine-tuning (within 1000 steps), while demonstrating strong empirical results on various tasks that require long context, including passkey retrieval, language mode...
true
true
Shouyuan Chen and Sherman Wong and Liangjian Chen and Yuandong Tian
2023
null
https://arxiv.org/abs/2306.15595
null
null
Extending Context Window of Large Language Models via Positional Interpolation
Extending Context Window of Large Language Models via ... - arXiv
https://arxiv.org/abs/2306.15595
We present Position Interpolation (PI) that extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
gpt4_technical
\cite{gpt4_technical}
GPT-4 Technical Report
null
null
true
false
OpenAI
2023
null
null
null
arXiv preprint arXiv:2303.08774
GPT-4 Technical Report
GPT-4 Technical Report
http://arxiv.org/pdf/2303.08774v6
We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam...
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
mistralai2024large
\cite{mistralai2024large}
Large Enough
null
null
true
false
Mistral AI
2024
null
https://mistral.ai/news/mistral-large-2407/
null
null
Large Enough
is large enough | Meaning, Grammar Guide & Usage Examples
https://ludwig.guru/s/is+large+enough
"is large enough" is correct and usable in written English. You can use it when you need to express that an object, quantity, or area of space is greater than
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
liu-etal-2024-lost
\cite{liu-etal-2024-lost}
Lost in the Middle: How Language Models Use Long Contexts
http://arxiv.org/abs/2307.03172v3
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-val...
true
true
Liu, Nelson F. and Lin, Kevin and Hewitt, John and Paranjape, Ashwin and Bevilacqua, Michele and Petroni, Fabio and Liang, Percy
2024
null
https://aclanthology.org/2024.tacl-1.9/
10.1162/tacl_a_00638
Transactions of the Association for Computational Linguistics
Lost in the Middle: How Language Models Use Long Contexts
Lost in the Middle: How Language Models Use Long Contexts
http://arxiv.org/pdf/2307.03172v3
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-val...
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
ivgi-etal-2023-sled
\cite{ivgi-etal-2023-sled}
Efficient Long-Text Understanding with Short-Text Models
http://arxiv.org/abs/2208.00748v3
Transformer-based pretrained language models (LMs) are ubiquitous across natural language understanding, but cannot be applied to long sequences such as stories, scientific articles and long documents, due to their quadratic complexity. While a myriad of efficient transformer variants have been proposed, they are typic...
true
true
Ivgi, Maor and Shaham, Uri and Berant, Jonathan
2023
null
https://aclanthology.org/2023.tacl-1.17/
10.1162/tacl_a_00547
Transactions of the Association for Computational Linguistics
Efficient Long-Text Understanding with Short-Text Models
Efficient Long-Text Understanding with Short-Text Models
https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00547/115346/Efficient-Long-Text-Understanding-with-Short-Text
In this work we present SLED, a simple approach for modeling long texts that slides a pretrained short-range encoder over a long input document
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
bertsch2023unlimiformer
\cite{bertsch2023unlimiformer}
Unlimiformer: Long-Range Transformers with Unlimited Length Input
http://arxiv.org/abs/2305.01625v3
Since the proposal of transformers, these models have been limited to bounded input lengths, because of their need to attend to every token in the input. In this work, we propose Unlimiformer: a general approach that wraps any existing pretrained encoder-decoder transformer, and offloads the cross-attention computation...
true
true
Amanda Bertsch and Uri Alon and Graham Neubig and Matthew R. Gormley
2023
null
https://openreview.net/forum?id=lJWUJWLCJo
null
null
Unlimiformer: Long-Range Transformers with Unlimited Length Input
Public repo for the NeurIPS 2023 paper "Unlimiformer
https://github.com/abertsch72/unlimiformer
Unlimiformer: Long-Range Transformers with Unlimited Length Input (NeurIPS 2023) ... Unlimiformer is a method for augmenting pretrained encoder-decoder models
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
saxena2025endtoendlongdocumentsummarization
\cite{saxena2025endtoendlongdocumentsummarization}
End-to-End Long Document Summarization using Gradient Caching
http://arxiv.org/abs/2501.01805v2
Training transformer-based encoder-decoder models for long document summarization poses a significant challenge due to the quadratic memory consumption during training. Several approaches have been proposed to extend the input length at test time, but training with these approaches is still difficult, requiring truncat...
true
true
Rohit Saxena and Hao Tang and Frank Keller
2025
null
https://arxiv.org/abs/2501.01805
null
null
End-to-End Long Document Summarization using Gradient Caching
[Literature Review] End-to-End Long Document ...
https://www.themoonlight.io/en/review/end-to-end-long-document-summarization-using-gradient-caching
This page provides the most accurate and concise summary worldwide for the paper titled End-to-End Long Document Summarization using Gradient Caching. With
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
zhang2024chain
\cite{zhang2024chain}
Chain of Agents: Large Language Models Collaborating on Long-Context Tasks
http://arxiv.org/abs/2406.02818v1
Addressing the challenge of effectively processing long contexts has become a critical issue for Large Language Models (LLMs). Two common strategies have emerged: 1) reducing the input length, such as retrieving relevant chunks by Retrieval-Augmented Generation (RAG), and 2) expanding the context window limit of LLMs. ...
true
true
Yusen Zhang and Ruoxi Sun and Yanfei Chen and Tomas Pfister and Rui Zhang and Sercan O Arik
2,024
null
https://openreview.net/forum?id=LuCLf4BJsr
null
null
Chain of Agents: Large Language Models Collaborating on Long-Context Tasks
Chain of Agents: Large Language Models Collaborating ...
https://arxiv.org/abs/2406.02818
arXiv:2406.02818, Computer Science > Computation and Language.
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
chang2024booookscore
\cite{chang2024booookscore}
BooookScore: A systematic exploration of book-length summarization in the era of LLMs
http://arxiv.org/abs/2310.00785v4
Summarizing book-length documents (>100K tokens) that exceed the context window size of large language models (LLMs) requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it has y...
true
true
Yapei Chang and Kyle Lo and Tanya Goyal and Mohit Iyyer
2,024
null
https://openreview.net/forum?id=7Ttk3RzDeu
null
null
BooookScore: A systematic exploration of book-length summarization in the era of LLMs
lilakk/BooookScore - GitHub
https://github.com/lilakk/BooookScore
Official package for our ICLR 2024 paper, "BooookScore: A systematic exploration of book-length summarization in the era of LLMs". arxiv.org/abs/2310.00785
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
jeong2025agentasjudgefactualsummarizationlong
\cite{jeong2025agentasjudgefactualsummarizationlong}
Agent-as-Judge for Factual Summarization of Long Narratives
http://arxiv.org/abs/2501.09993v1
Large Language Models (LLMs) have demonstrated near-human performance in summarization tasks based on traditional metrics such as ROUGE and BERTScore. However, these metrics do not adequately capture critical aspects of summarization quality, such as factual accuracy, particularly for long narratives (>100K tokens). Re...
true
true
Yeonseok Jeong and Minsoo Kim and Seung-won Hwang and Byung-Hak Kim
2,025
null
https://arxiv.org/abs/2501.09993
null
null
Agent-as-Judge for Factual Summarization of Long Narratives
YeonseokJeong/NarrativeFactScore: Agent-as-Judge for ...
https://github.com/YeonseokJeong/NarrativeFactScore
NarrativeFactScore is a novel "Agent-as-a-Judge" framework for evaluating and refining summaries of long narratives. The framework provides factual
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
NEURIPS2020_rag
\cite{NEURIPS2020_rag}
Advances in Neural Information Processing Systems 33, NeurIPS 2020
null
null
true
false
Lewis, Patrick and Perez, Ethan and Piktus, Aleksandra and Petroni, Fabio and Karpukhin, Vladimir and Goyal, Naman and Küttler, Heinrich and Lewis, Mike and Yih, Wen-tau and Rocktäschel, Tim and Riedel, Sebastian and Kiela, Douwe
2020
null
https://proceedings.neurips.cc/paper_files/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf
null
null
Advances in Neural Information Processing Systems 33, NeurIPS 2020
Book - NIPS
https://papers.nips.cc/paper/2020
Advances in Neural Information Processing Systems 33 (NeurIPS 2020); proceedings listing.
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
geng-etal-2022-improving-abstractive
\cite{geng-etal-2022-improving-abstractive}
Improving Abstractive Dialogue Summarization with Speaker-Aware Supervised Contrastive Learning
null
null
true
false
Geng, Zhichao and Zhong, Ming and Yin, Zhangyue and Qiu, Xipeng and Huang, Xuanjing
2,022
null
https://aclanthology.org/2022.coling-1.569/
null
null
Improving Abstractive Dialogue Summarization with Speaker-Aware Supervised Contrastive Learning
Improving Abstractive Dialogue Summarization with ...
https://aclanthology.org/2022.coling-1.569.pdf
by Z Geng · 2022 · Cited by 12 — We propose three speaker-aware supervised contrastive learning tasks: Token-level SCL, Turn-level SCL, and Global-level SCL. By jointly
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
2505.24575v1
uthus-ni-2023-rise
\cite{uthus-ni-2023-rise}
RISE: Leveraging Retrieval Techniques for Summarization Evaluation
http://arxiv.org/abs/2212.08775v2
Evaluating automatically-generated text summaries is a challenging task. While there have been many interesting approaches, they still fall short of human evaluations. We present RISE, a new approach for evaluating summaries by leveraging techniques from information retrieval. RISE is first trained as a retrieval task ...
true
true
Uthus, David and Ni, Jianmo
2,023
null
https://aclanthology.org/2023.findings-acl.865/
10.18653/v1/2023.findings-acl.865
null
RISE: Leveraging Retrieval Techniques for Summarization Evaluation
RISE: Leveraging Retrieval Techniques for Summarization Evaluation
http://arxiv.org/pdf/2212.08775v2
Evaluating automatically-generated text summaries is a challenging task. While there have been many interesting approaches, they still fall short of human evaluations. We present RISE, a new approach for evaluating summaries by leveraging techniques from information retrieval. RISE is first trained as a retrieval task ...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
ouyang2022traininglanguagemodelsfollow
\cite{ouyang2022traininglanguagemodelsfollow}
Training language models to follow instructions with human feedback
null
null
true
false
Long Ouyang and Jeffrey Wu and Xu Jiang and Diogo Almeida and Carroll L. Wainwright and Pamela Mishkin and Chong Zhang and Sandhini Agarwal and Katarina Slama and Alex Ray and John Schulman and Jacob Hilton and Fraser Kelton and Luke Miller and Maddie Simens and Amanda Askell and Peter Welinder and Paul F. Christiano a...
2022
null
http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html
null
null
Training language models to follow instructions with human feedback
Training language models to follow instructions with human feedback
http://arxiv.org/pdf/2203.02155v1
Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for alig...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
bai2022traininghelpfulharmlessassistant
\cite{bai2022traininghelpfulharmlessassistant}
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
http://arxiv.org/abs/2204.05862v1
We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding...
true
true
Yuntao Bai and Andy Jones and Kamal Ndousse and Amanda Askell and Anna Chen and Nova DasSarma and Dawn Drain and Stanislav Fort and Deep Ganguli and Tom Henighan and Nicholas Joseph and Saurav Kadavath and Jackson Kernion and Tom Conerly and Sheer El-Showk and Nelson Elhage and Zac Hatfield-Dodds and Danny Hernandez an...
2022
null
https://arxiv.org/abs/2204.05862
null
ArXiv preprint
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Training a Helpful and Harmless Assistant with Reinforcement ...
https://arxiv.org/abs/2204.05862
[2204.05862] Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
ganguli2022redteaminglanguagemodels
\cite{ganguli2022redteaminglanguagemodels}
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
http://arxiv.org/abs/2209.07858v2
We describe our early efforts to red team language models in order to simultaneously discover, measure, and attempt to reduce their potentially harmful outputs. We make three main contributions. First, we investigate scaling behaviors for red teaming across 3 model sizes (2.7B, 13B, and 52B parameters) and 4 model type...
true
true
Deep Ganguli and Liane Lovitt and Jackson Kernion and Amanda Askell and Yuntao Bai and Saurav Kadavath and Ben Mann and Ethan Perez and Nicholas Schiefer and Kamal Ndousse and Andy Jones and Sam Bowman and Anna Chen and Tom Conerly and Nova DasSarma and Dawn Drain and Nelson Elhage and Sheer El-Showk and Stanislav Fort...
2022
null
https://arxiv.org/abs/2209.07858
null
ArXiv preprint
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
(PDF) Red Teaming Language Models to Reduce Harms
https://www.researchgate.net/publication/363651560_Red_Teaming_Language_Models_to_Reduce_Harms_Methods_Scaling_Behaviors_and_Lessons_Learned
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. August 2022. DOI:10.48550/arXiv.2209.07858.
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
lermen2024lorafinetuningefficientlyundoes
\cite{lermen2024lorafinetuningefficientlyundoes}
LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B
http://arxiv.org/abs/2310.20624v2
AI developers often apply safety alignment procedures to prevent the misuse of their AI systems. For example, before Meta released Llama 2-Chat - a collection of instruction fine-tuned large language models - they invested heavily in safety training, incorporating extensive red-teaming and reinforcement learning from h...
true
true
Simon Lermen and Charlie Rogers-Smith and Jeffrey Ladish
2023
null
https://arxiv.org/abs/2310.20624
null
ArXiv preprint
LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B
Paper page - LoRA Fine-tuning Efficiently Undoes Safety ...
https://huggingface.co/papers/2310.20624
We achieve a refusal rate below 1% for our 70B Llama 2-Chat model on two refusal benchmarks. Our fine-tuning method retains general performance,
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
yang2023shadowalignmenteasesubverting
\cite{yang2023shadowalignmenteasesubverting}
Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models
http://arxiv.org/abs/2310.02949v1
Warning: This paper contains examples of harmful language, and reader discretion is recommended. The increasing open release of powerful large language models (LLMs) has facilitated the development of downstream applications by reducing the essential cost of data annotation and computation. To ensure AI safety, extensi...
true
true
Xianjun Yang and Xiao Wang and Qi Zhang and Linda Petzold and William Yang Wang and Xun Zhao and Dahua Lin
2,023
null
https://arxiv.org/abs/2310.02949
null
ArXiv preprint
Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models
The Ease of Subverting Safely-Aligned Language Models
https://openreview.net/forum?id=rg0vQmkB7F
The paper identifies a new attack, termed "Shadow Alignment", that undermines the safety measures of large language models (LLMs) with minimal
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
qi2023finetuningalignedlanguagemodels
\cite{qi2023finetuningalignedlanguagemodels}
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
null
null
true
false
Xiangyu Qi and Yi Zeng and Tinghao Xie and Pin-Yu Chen and Ruoxi Jia and Prateek Mittal and Peter Henderson
2,024
null
https://openreview.net/forum?id=hTEGyKf0dZ
null
null
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
Fine-tuning Aligned Language Models Compromises ...
https://openreview.net/forum?id=Xaf289hqmZ
by X Qi · 2024 · Cited by 717 — Fine-tuning aligned language models compromises safety, even when users do not intend to! Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
andriushchenko2024jailbreaking
\cite{andriushchenko2024jailbreaking}
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
http://arxiv.org/abs/2404.02151v4
We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we initially design an adversarial prompt template (sometimes adapted to the target LLM), and then we apply random search ...
true
true
Andriushchenko, Maksym and Croce, Francesco and Flammarion, Nicolas
2,024
null
https://arxiv.org/abs/2404.02151
null
ArXiv preprint
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
Jailbreaking Leading Safety-Aligned LLMs with Simple ...
https://openreview.net/forum?id=hXA8wqRdyV
by M Andriushchenko · Cited by 229 — This paper proposes an adaptive jailbreaking attack, which aims at attacking safety-aligned language models (LLMs), demonstrating that even the latest models
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
zou2023universaltransferableadversarialattacks
\cite{zou2023universaltransferableadversarialattacks}
Universal and Transferable Adversarial Attacks on Aligned Language Models
null
null
true
false
Andy Zou and Zifan Wang and Nicholas Carlini and Milad Nasr and J. Zico Kolter and Matt Fredrikson
2,023
null
https://arxiv.org/abs/2307.15043
null
ArXiv preprint
Universal and Transferable Adversarial Attacks on Aligned Language Models
Universal and Transferable Adversarial Attacks on Aligned Language Models
http://arxiv.org/pdf/2307.15043v2
Because "out-of-the-box" large language models are capable of generating a great deal of objectionable content, recent work has focused on aligning these models in an attempt to prevent undesirable generation. While there has been some success at circumventing these measures -- so-called "jailbreaks" against LLMs -- th...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
chao2024jailbreakingblackboxlarge
\cite{chao2024jailbreakingblackboxlarge}
Jailbreaking Black Box Large Language Models in Twenty Queries
null
null
true
false
Patrick Chao and Alexander Robey and Edgar Dobriban and Hamed Hassani and George J. Pappas and Eric Wong
2,023
null
https://arxiv.org/abs/2310.08419
null
ArXiv preprint
Jailbreaking Black Box Large Language Models in Twenty Queries
Jailbreaking Black Box Large Language Models in Twenty Queries
http://arxiv.org/pdf/2310.08419v4
There is growing interest in ensuring that large language models (LLMs) align with human values. However, the alignment of such models is vulnerable to adversarial jailbreaks, which coax LLMs into overriding their safety guardrails. The identification of these vulnerabilities is therefore instrumental in understanding ...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
weidinger2021ethicalsocialrisksharm
\cite{weidinger2021ethicalsocialrisksharm}
Ethical and social risks of harm from Language Models
http://arxiv.org/abs/2112.04359v1
This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawi...
true
true
Laura Weidinger and John Mellor and Maribeth Rauh and Conor Griffin and Jonathan Uesato and Po-Sen Huang and Myra Cheng and Mia Glaese and Borja Balle and Atoosa Kasirzadeh and Zac Kenton and Sasha Brown and Will Hawkins and Tom Stepleton and Courtney Biles and Abeba Birhane and Julia Haas and Laura Rimell and Lisa Ann...
2,021
null
https://arxiv.org/abs/2112.04359
null
ArXiv preprint
Ethical and social risks of harm from Language Models
Ethical and social risks of harm from Language Models
http://arxiv.org/pdf/2112.04359v1
This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawi...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
arditi2024refusallanguagemodelsmediated
\cite{arditi2024refusallanguagemodelsmediated}
Refusal in Language Models Is Mediated by a Single Direction
http://arxiv.org/abs/2406.11717v3
Conversational large language models are fine-tuned for both instruction-following and safety, resulting in models that obey benign requests but refuse harmful ones. While this refusal behavior is widespread across chat models, its underlying mechanisms remain poorly understood. In this work, we show that refusal is me...
true
true
Andy Arditi and Oscar Obeso and Aaquib Syed and Daniel Paleka and Nina Panickssery and Wes Gurnee and Neel Nanda
2,024
null
http://papers.nips.cc/paper_files/paper/2024/hash/f545448535dfde4f9786555403ab7c49-Abstract-Conference.html
null
null
Refusal in Language Models Is Mediated by a Single Direction
Refusal in Language Models Is Mediated by a Single Direction
http://arxiv.org/pdf/2406.11717v3
Conversational large language models are fine-tuned for both instruction-following and safety, resulting in models that obey benign requests but refuse harmful ones. While this refusal behavior is widespread across chat models, its underlying mechanisms remain poorly understood. In this work, we show that refusal is me...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
marshall2024refusalllmsaffinefunction
\cite{marshall2024refusalllmsaffinefunction}
Refusal in LLMs is an Affine Function
http://arxiv.org/abs/2411.09003v3
We propose affine concept editing (ACE) as an approach for steering language models' behavior by intervening directly in activations. We begin with an affine decomposition of model activation vectors and show that prior methods for steering model behavior correspond to subsets of terms of this decomposition. We then pr...
true
true
Thomas Marshall and Adam Scherlis and Nora Belrose
2,024
null
https://arxiv.org/abs/2411.09003
null
ArXiv preprint
Refusal in LLMs is an Affine Function
Refusal in LLMs is an Affine Function
http://arxiv.org/pdf/2411.09003v3
We propose affine concept editing (ACE) as an approach for steering language models' behavior by intervening directly in activations. We begin with an affine decomposition of model activation vectors and show that prior methods for steering model behavior correspond to subsets of terms of this decomposition. We then pr...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
zou2023representationengineeringtopdownapproach
\cite{zou2023representationengineeringtopdownapproach}
Representation Engineering: A Top-Down Approach to AI Transparency
http://arxiv.org/abs/2310.01405v4
In this paper, we identify and characterize the emerging area of representation engineering (RepE), an approach to enhancing the transparency of AI systems that draws on insights from cognitive neuroscience. RepE places population-level representations, rather than neurons or circuits, at the center of analysis, equipp...
true
true
Andy Zou and Long Phan and Sarah Chen and James Campbell and Phillip Guo and Richard Ren and Alexander Pan and Xuwang Yin and Mantas Mazeika and Ann-Kathrin Dombrowski and Shashwat Goel and Nathaniel Li and Michael J. Byun and Zifan Wang and Alex Mallen and Steven Basart and Sanmi Koyejo and Dawn Song and Matt Fredriks...
2,023
null
https://arxiv.org/abs/2310.01405
null
ArXiv preprint
Representation Engineering: A Top-Down Approach to AI Transparency
Representation Engineering: A Top-Down Approach to AI ...
https://montrealethics.ai/representation-engineering-a-top-down-approach-to-ai-transparency/
RepE is a top-down approach to transparency research that treats representations as the fundamental unit of analysis, aiming to understand and control
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
Spectralediting
\cite{Spectralediting}
Spectral Editing of Activations for Large Language Model Alignment
http://arxiv.org/abs/2405.09719v3
Large language models (LLMs) often exhibit undesirable behaviours, such as generating untruthful or biased content. Editing their internal representations has been shown to be effective in mitigating such behaviours on top of the existing alignment methods. We propose a novel inference-time editing method, namely spect...
true
true
Yifu Qiu and Zheng Zhao and Yftah Ziser and Anna Korhonen and Edoardo Maria Ponti and Shay B. Cohen
2,024
null
http://papers.nips.cc/paper_files/paper/2024/hash/684c59d614fe6ae74a3be8c3ef07e061-Abstract-Conference.html
null
null
Spectral Editing of Activations for Large Language Model Alignment
Spectral Editing of Activations for Large Language Model Alignment
http://arxiv.org/pdf/2405.09719v3
Large language models (LLMs) often exhibit undesirable behaviours, such as generating untruthful or biased content. Editing their internal representations has been shown to be effective in mitigating such behaviours on top of the existing alignment methods. We propose a novel inference-time editing method, namely spect...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
bhattacharjee2024inferencetimecategorywisesafetysteering
\cite{bhattacharjee2024inferencetimecategorywisesafetysteering}
Towards Inference-time Category-wise Safety Steering for Large Language Models
http://arxiv.org/abs/2410.01174v1
While large language models (LLMs) have seen unprecedented advancements in capabilities and applications across a variety of use-cases, safety alignment of these models is still an area of active research. The fragile nature of LLMs, even models that have undergone extensive alignment and safety training regimes, warra...
true
true
Amrita Bhattacharjee and Shaona Ghosh and Traian Rebedea and Christopher Parisien
2,024
null
https://arxiv.org/abs/2410.01174
null
ArXiv preprint
Towards Inference-time Category-wise Safety Steering for Large Language Models
Towards Inference-time Category-wise Safety Steering for Large...
https://openreview.net/forum?id=EkQRNLPFcn
We propose and explore an inference-time safety steering method for LLMs by intervening using category-specific steering vectors computed using model
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
uppaal2025profs
\cite{uppaal2025profs}
Model Editing as a Robust and Denoised variant of DPO: A Case Study on Toxicity
http://arxiv.org/abs/2405.13967v5
Recent alignment algorithms such as direct preference optimization (DPO) have been developed to improve the safety of large language models (LLMs) by training these models to match human behaviors exemplified by preference data. However, these methods are both computationally intensive and lacking in controllability an...
true
true
Uppaal, Rheeya and Dey, Apratim and He, Yiting and Zhong, Yiqiao and Hu, Junjie
2,025
null
null
null
null
Model Editing as a Robust and Denoised variant of DPO: A Case Study on Toxicity
Rheeya Uppaal - Google Scholar
https://scholar.google.com/citations?user=nx3vmEkAAAAJ&hl=en
DeTox: Toxic Subspace Projection for Model Editing. R Uppaal, A De ... 2019. Model editing as a robust and denoised variant of dpo: A case study on toxicity.
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
burns2024discoveringlatentknowledgelanguage
\cite{burns2024discoveringlatentknowledgelanguage}
Discovering Latent Knowledge in Language Models Without Supervision
http://arxiv.org/abs/2212.03827v2
Existing techniques for training language models can be misaligned with the truth: if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can't detect. We propose circumventing this i...
true
true
Collin Burns and Haotian Ye and Dan Klein and Jacob Steinhardt
2,023
null
https://openreview.net/pdf?id=ETKGuby0hcs
null
null
Discovering Latent Knowledge in Language Models Without Supervision
Discovering Latent Knowledge in Language Models Without Supervision
http://arxiv.org/pdf/2212.03827v2
Existing techniques for training language models can be misaligned with the truth: if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can't detect. We propose circumventing this i...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
panickssery2024steeringllama2contrastive
\cite{panickssery2024steeringllama2contrastive}
Steering Llama 2 via Contrastive Activation Addition
http://arxiv.org/abs/2312.06681v4
We introduce Contrastive Activation Addition (CAA), an innovative method for steering language models by modifying their activations during forward passes. CAA computes "steering vectors" by averaging the difference in residual stream activations between pairs of positive and negative examples of a particular behavior,...
true
true
Nina Panickssery and Nick Gabrieli and Julian Schulz and Meg Tong and Evan Hubinger and Alexander Matt Turner
2,023
null
https://arxiv.org/abs/2312.06681
null
ArXiv preprint
Steering Llama 2 via Contrastive Activation Addition
Steering Llama 2 via Contrastive Activation Addition
http://arxiv.org/pdf/2312.06681v4
We introduce Contrastive Activation Addition (CAA), an innovative method for steering language models by modifying their activations during forward passes. CAA computes "steering vectors" by averaging the difference in residual stream activations between pairs of positive and negative examples of a particular behavior,...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
turner2024steeringlanguagemodelsactivation
\cite{turner2024steeringlanguagemodelsactivation}
Steering Language Models With Activation Engineering
http://arxiv.org/abs/2308.10248v5
Prompt engineering and finetuning aim to maximize language model performance on a given metric (like toxicity reduction). However, these methods do not fully elicit a model's capabilities. To reduce this gap, we introduce activation engineering: the inference-time modification of activations in order to control (or ste...
true
true
Alexander Matt Turner and Lisa Thiergart and Gavin Leech and David Udell and Juan J. Vazquez and Ulisse Mini and Monte MacDiarmid
2,023
null
https://arxiv.org/abs/2308.10248
null
ArXiv preprint
Steering Language Models With Activation Engineering
Steering Language Models With Activation Engineering
http://arxiv.org/pdf/2308.10248v5
Prompt engineering and finetuning aim to maximize language model performance on a given metric (like toxicity reduction). However, these methods do not fully elicit a model's capabilities. To reduce this gap, we introduce activation engineering: the inference-time modification of activations in order to control (or ste...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
lee2025programmingrefusalconditionalactivation
\cite{lee2025programmingrefusalconditionalactivation}
Programming Refusal with Conditional Activation Steering
http://arxiv.org/abs/2409.05907v3
LLMs have shown remarkable capabilities, but precisely controlling their response behavior remains challenging. Existing activation steering methods alter LLM behavior indiscriminately, limiting their practical applicability in settings where selective responses are essential, such as content moderation or domain-speci...
true
true
Bruce W. Lee and Inkit Padhi and Karthikeyan Natesan Ramamurthy and Erik Miehling and Pierre Dognin and Manish Nagireddy and Amit Dhurandhar
2,024
null
https://arxiv.org/abs/2409.05907
null
ArXiv preprint
Programming Refusal with Conditional Activation Steering
Programming Refusal with Conditional Activation Steering
http://arxiv.org/pdf/2409.05907v3
LLMs have shown remarkable capabilities, but precisely controlling their response behavior remains challenging. Existing activation steering methods alter LLM behavior indiscriminately, limiting their practical applicability in settings where selective responses are essential, such as content moderation or domain-speci...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
guerner2024geometricnotioncausalprobing
\cite{guerner2024geometricnotioncausalprobing}
A Geometric Notion of Causal Probing
http://arxiv.org/abs/2307.15054v4
The linear subspace hypothesis (Bolukbasi et al., 2016) states that, in a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace. Prior work has relied on auxiliary classification tasks to identify and evaluate candidate subspaces that might give sup...
true
true
Clément Guerner and Anej Svete and Tianyu Liu and Alexander Warstadt and Ryan Cotterell
2,023
null
https://arxiv.org/abs/2307.15054
null
ArXiv preprint
A Geometric Notion of Causal Probing
A Geometric Notion of Causal Probing
http://arxiv.org/pdf/2307.15054v4
The linear subspace hypothesis (Bolukbasi et al., 2016) states that, in a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace. Prior work has relied on auxiliary classification tasks to identify and evaluate candidate subspaces that might give sup...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
haghighatkhah2022betterhitnailhead
\cite{haghighatkhah2022betterhitnailhead}
Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection
http://arxiv.org/abs/2212.04273v1
Bias elimination and recent probing studies attempt to remove specific information from embedding spaces. Here it is important to remove as much of the target information as possible, while preserving any other information present. INLP is a popular recent method which removes specific information through iterative nul...
true
true
Haghighatkhah, Pantea and Fokkens, Antske and Sommerauer, Pia and Speckmann, Bettina and Verbeek, Kevin
2,022
null
https://aclanthology.org/2022.emnlp-main.575
10.18653/v1/2022.emnlp-main.575
null
Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection
Better Hit the Nail on the Head than Beat around the Bush
https://www.researchgate.net/publication/366135893_Better_Hit_the_Nail_on_the_Head_than_Beat_around_the_Bush_Removing_Protected_Attributes_with_a_Single_Projection
Our comparison between MP and INLP shows that (1) one MP projection removes linear separability based on the target and (2) MP has less impact
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
ravfogel2020nulloutguardingprotected
\cite{ravfogel2020nulloutguardingprotected}
Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
http://arxiv.org/abs/2004.07667v2
The ability to control for the kinds of information encoded in neural representation has a variety of use cases, especially in light of the challenge of interpreting these models. We present Iterative Null-space Projection (INLP), a novel method for removing information from neural representations. Our method is based ...
true
true
Ravfogel, Shauli and Elazar, Yanai and Gonen, Hila and Twiton, Michael and Goldberg, Yoav
2,020
null
https://aclanthology.org/2020.acl-main.647
10.18653/v1/2020.acl-main.647
null
Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
Shauli Ravfogel - Google Scholar
https://scholar.google.co.il/citations?user=x09r-T8AAAAJ&hl=en
Null it out: Guarding protected attributes by iterative nullspace projection. S Ravfogel, Y Elazar, H Gonen, M Twiton, Y Goldberg. Proceedings of the 58th
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
belrose2023leaceperfectlinearconcept
\cite{belrose2023leaceperfectlinearconcept}
LEACE: Perfect linear concept erasure in closed form
http://arxiv.org/abs/2306.03819v4
Concept erasure aims to remove specified features from an embedding. It can improve fairness (e.g. preventing a classifier from using gender or race) and interpretability (e.g. removing a concept to observe changes in model behavior). We introduce LEAst-squares Concept Erasure (LEACE), a closed-form method which provab...
true
true
Nora Belrose and David Schneider-Joseph and Shauli Ravfogel and Ryan Cotterell and Edward Raff and Stella Biderman
2,023
null
http://papers.nips.cc/paper_files/paper/2023/hash/d066d21c619d0a78c5b557fa3291a8f4-Abstract-Conference.html
null
null
LEACE: Perfect linear concept erasure in closed form
LEACE: Perfect linear concept erasure in closed form
http://arxiv.org/pdf/2306.03819v4
Concept erasure aims to remove specified features from an embedding. It can improve fairness (e.g. preventing a classifier from using gender or race) and interpretability (e.g. removing a concept to observe changes in model behavior). We introduce LEAst-squares Concept Erasure (LEACE), a closed-form method which provab...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
wang2024trojanactivationattackredteaming
\cite{wang2024trojanactivationattackredteaming}
Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment
http://arxiv.org/abs/2311.09433v3
To ensure AI safety, instruction-tuned Large Language Models (LLMs) are specifically trained to ensure alignment, which refers to making models behave in accordance with human intentions. While these models have demonstrated commendable results on various safety benchmarks, the vulnerability of their safety alignment h...
true
true
Haoran Wang and Kai Shu
2,023
null
https://arxiv.org/abs/2311.09433
null
ArXiv preprint
Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment
Trojan Activation Attack: Red-Teaming Large Language Models ...
https://arxiv.org/html/2311.09433v3
Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment. Large Language Models (LLMs) are generally trained on massive text corpora scraped from the web (Touvron e...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
bolukbasi2016man
\cite{bolukbasi2016man}
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
http://arxiv.org/abs/1607.06520v1
The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings traine...
true
true
Tolga Bolukbasi and Kai-Wei Chang and James Y. Zou and Venkatesh Saligrama and Adam Tauman Kalai
2,016
null
https://proceedings.neurips.cc/paper/2016/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html
null
null
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
Tolga Bolukbasi - Google Scholar
https://scholar.google.com/citations?user=3rF9gtAAAAAJ&hl=en
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. T Bolukbasi, KW Chang, J Zou, V Saligrama, A Kalai. 2016.
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
elhage2022toymodelssuperposition
\cite{elhage2022toymodelssuperposition}
Toy Models of Superposition
http://arxiv.org/abs/2209.10652v1
Neural networks often pack many unrelated concepts into a single neuron - a puzzling phenomenon known as 'polysemanticity' which makes interpretability much more challenging. This paper provides a toy model where polysemanticity can be fully understood, arising as a result of models storing additional sparse features i...
true
true
Nelson Elhage and Tristan Hume and Catherine Olsson and Nicholas Schiefer and Tom Henighan and Shauna Kravec and Zac Hatfield-Dodds and Robert Lasenby and Dawn Drain and Carol Chen and Roger Grosse and Sam McCandlish and Jared Kaplan and Dario Amodei and Martin Wattenberg and Christopher Olah
2,022
null
https://arxiv.org/abs/2209.10652
null
ArXiv preprint
Toy Models of Superposition
Toy Models of Superposition
http://arxiv.org/pdf/2209.10652v1
Neural networks often pack many unrelated concepts into a single neuron - a puzzling phenomenon known as 'polysemanticity' which makes interpretability much more challenging. This paper provides a toy model where polysemanticity can be fully understood, arising as a result of models storing additional sparse features i...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
park2024linearrepresentationhypothesisgeometry
\cite{park2024linearrepresentationhypothesisgeometry}
The Linear Representation Hypothesis and the Geometry of Large Language Models
http://arxiv.org/abs/2311.03658v2
Informally, the 'linear representation hypothesis' is the idea that high-level concepts are represented linearly as directions in some representation space. In this paper, we address two closely related questions: What does "linear representation" actually mean? And, how do we make sense of geometric notions (e.g., cos...
true
true
Kiho Park and Yo Joong Choe and Victor Veitch
2,024
null
https://openreview.net/forum?id=UGpGkLzwpP
null
null
The Linear Representation Hypothesis and the Geometry of Large Language Models
NeurIPS The Linear Representation Hypothesis in Language Models
https://neurips.cc/virtual/2023/77537
In the context of large language models, the "linear representation hypothesis" is the idea that high-level concepts are represented linearly as directions
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
mikolov2013linguistic
\cite{mikolov2013linguistic}
Linguistic Regularities in Continuous Space Word Representations
null
null
true
false
Mikolov, Tomas and Yih, Wen-tau and Zweig, Geoffrey
2,013
null
https://aclanthology.org/N13-1090
null
null
Linguistic Regularities in Continuous Space Word Representations
arXiv:1806.07978v1 [cs.LG] 20 Jun 2018
https://arxiv.org/pdf/1806.07978
by T Eichinger · 2018 · Cited by 1 — Mikolov, W. Yih, and G. Zweig, “Linguistic regularities in continuous space word representations.” in HLT-NAACL, 2013, pp. 746–
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
nanda2023emergentlinearrepresentationsworld
\cite{nanda2023emergentlinearrepresentationsworld}
Emergent Linear Representations in World Models of Self-Supervised Sequence Models
http://arxiv.org/abs/2309.00941v2
How do sequence models represent their decision-making process? Prior work suggests that Othello-playing neural network learned nonlinear models of the board state (Li et al., 2023). In this work, we provide evidence of a closely related linear representation of the board. In particular, we show that probing for "my co...
true
true
Nanda, Neel and Lee, Andrew and Wattenberg, Martin
2,023
null
https://aclanthology.org/2023.blackboxnlp-1.2
10.18653/v1/2023.blackboxnlp-1.2
null
Emergent Linear Representations in World Models of Self-Supervised Sequence Models
Emergent Linear Representations in World Models of Self- ...
https://huggingface.co/papers/2309.00941
Sequence models use linear representations to interpret their decision-making processes in games like Othello, allowing for control of model
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
hernandez2021lowdimensionallineargeometrycontextualized
\cite{hernandez2021lowdimensionallineargeometrycontextualized}
The Low-Dimensional Linear Geometry of Contextualized Word Representations
http://arxiv.org/abs/2105.07109v2
Black-box probing models can reliably extract linguistic features like tense, number, and syntactic role from pretrained word representations. However, the manner in which these features are encoded in representations remains poorly understood. We present a systematic study of the linear geometry of contextualized word...
true
true
Hernandez, Evan and Andreas, Jacob
2,021
null
https://aclanthology.org/2021.conll-1.7
10.18653/v1/2021.conll-1.7
null
The Low-Dimensional Linear Geometry of Contextualized Word Representations
Evan Hernandez - Google Scholar
https://scholar.google.com/citations?user=38EC20cAAAAJ&hl=en
The low-dimensional linear geometry of contextualized word representations. E Hernandez, J Andreas. arXiv preprint arXiv:2105.07109, 2021.
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
bricken2023monosemanticity
\cite{bricken2023monosemanticity}
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
null
null
true
false
Bricken, Trenton and Templeton, Adly and Batson, Joshua and Chen, Brian and Jermyn, Adam and Conerly, Tom and Turner, Nick and Anil, Cem and Denison, Carson and Askell, Amanda and Lasenby, Robert and Wu, Yifan and Kravec, Shauna and Schiefer, Nicholas and Maxwell, Tim and Joseph, Nicholas and Hatfield-Dodds, Zac and Ta...
2,023
null
null
null
Transformer Circuits Thread
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
Decomposing Language Models With Dictionary Learning
https://www.anthropic.com/research/towards-monosemanticity-decomposing-language-models-with-dictionary-learning
In our latest paper, Towards Monosemanticity: Decomposing Language Models With Dictionary Learning, we outline evidence that there are better units of analysis
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
templeton2024scaling
\cite{templeton2024scaling}
Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
null
null
true
false
Templeton, Adly and Conerly, Tom and Marcus, Jonathan and Lindsey, Jack and Bricken, Trenton and Chen, Brian and Pearce, Adam and Citro, Craig and Ameisen, Emmanuel and Jones, Andy and Cunningham, Hoagy and Turner, Nicholas L and McDougall, Callum and MacDiarmid, Monte and Freeman, C. Daniel and Sumers, Theodore R. and...
2,024
null
https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
null
Transformer Circuits Thread
Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
arXiv:2406.17969v2 [cs.CL] 15 Oct 2024
https://arxiv.org/pdf/2406.17969
by H Yan · 2024 · Cited by 8 — Scaling monosemanticity: Extracting interpretable features from Claude 3 Sonnet. Transformer Circuits Thread.
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
cunningham2023sparseautoencodershighlyinterpretable
\cite{cunningham2023sparseautoencodershighlyinterpretable}
Sparse Autoencoders Find Highly Interpretable Features in Language Models
http://arxiv.org/abs/2309.08600v3
One of the roadblocks to a better understanding of neural networks' internals is \textit{polysemanticity}, where neurons appear to activate in multiple, semantically distinct contexts. Polysemanticity prevents us from identifying concise, human-understandable explanations for what neural networks are doing internally. ...
true
true
Robert Huben and Hoagy Cunningham and Logan Riggs and Aidan Ewart and Lee Sharkey
2,024
null
https://openreview.net/forum?id=F76bwRSLeK
null
null
Sparse Autoencoders Find Highly Interpretable Features in Language Models
Sparse Autoencoders Find Highly Interpretable Features in ...
https://openreview.net/forum?id=F76bwRSLeK
This paper proposes using sparse autoencoders to learn interpretable and monosemantic features from the internal activations of language models. This paper presents a way to make the individual features of Large Language Models more interpretable by learning simple autoencoders with activation sparsity. On the original...
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
pearce2024bilinearmlpsenableweightbased
\cite{pearce2024bilinearmlpsenableweightbased}
Bilinear MLPs enable weight-based mechanistic interpretability
http://arxiv.org/abs/2410.08417v2
A mechanistic understanding of how MLPs do computation in deep neural networks remains elusive. Current interpretability work can extract features from hidden activations over an input dataset but generally cannot explain how MLP weights construct features. One challenge is that element-wise nonlinearities introduce hi...
true
true
Michael T. Pearce and Thomas Dooms and Alice Rigg and Jose M. Oramas and Lee Sharkey
2,024
null
https://arxiv.org/abs/2410.08417
null
ArXiv preprint
Bilinear MLPs enable weight-based mechanistic interpretability
Bilinear MLPs enable weight-based mechanistic ...
https://openreview.net/forum?id=gI0kPklUKS
by MT Pearce · Cited by 2 — The close-to-linear structure of bilinear MLPs enables weight-based analysis that reveals interpretable low rank structure across multiple modalities.
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
elhage2021mathematical
\cite{elhage2021mathematical}
A Mathematical Framework for Transformer Circuits
null
null
true
false
Elhage, Nelson and Nanda, Neel and Olsson, Catherine and Henighan, Tom and Joseph, Nicholas and Mann, Ben and Askell, Amanda and Bai, Yuntao and Chen, Anna and Conerly, Tom and DasSarma, Nova and Drain, Dawn and Ganguli, Deep and Hatfield-Dodds, Zac and Hernandez, Danny and Jones, Andy and Kernion, Jackson and Lovitt, ...
2,021
null
null
null
Transformer Circuits Thread
A Mathematical Framework for Transformer Circuits
A Walkthrough of A Mathematical Framework for ...
https://www.neelnanda.io/mechanistic-interpretability/a-walkthrough-of-a-mathematical-framework-for-transformer-circuits
A Mathematical Framework for Transformer Circuits is, in my opinion, the coolest paper I've ever had the privilege of working on.
COSMIC: Generalized Refusal Direction Identification in LLM Activations
2506.00085v1
lieberum2023doescircuitanalysisinterpretability
\cite{lieberum2023doescircuitanalysisinterpretability}
Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla
http://arxiv.org/abs/2307.09458v3
\emph{Circuit analysis} is a promising technique for understanding the internal mechanisms of language models. However, existing analyses are done in small models far from the state of the art. To address this, we present a case study of circuit analysis in the 70B Chinchilla model, aiming to test the scalability of ci...
true
true
Tom Lieberum and Matthew Rahtz and János Kramár and Neel Nanda and Geoffrey Irving and Rohin Shah and Vladimir Mikulik
2,023
null
https://arxiv.org/abs/2307.09458
null
ArXiv preprint
Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla
Does Circuit Analysis Interpretability Scale? Evidence from Multiple ...
https://arxiv.org/abs/2307.09458
null
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
liang2022holistic
\cite{liang2022holistic}
Holistic Evaluation of Language Models
http://arxiv.org/abs/2211.09110v2
Language models (LMs) are becoming the foundation for almost all major language technologies, but their capabilities, limitations, and risks are not well understood. We present Holistic Evaluation of Language Models (HELM) to improve the transparency of language models. First, we taxonomize the vast space of potential ...
true
true
Liang, Percy and Bommasani, Rishi and Lee, Tony and Tsipras, Dimitris and Soylu, Dilara and Yasunaga, Michihiro and Zhang, Yian and Narayanan, Deepak and Wu, Yuhuai and Kumar, Ananya and others
2,022
null
null
null
arXiv preprint arXiv:2211.09110
Holistic Evaluation of Language Models
Holistic Evaluation of Language Models
http://arxiv.org/pdf/2211.09110v2
Language models (LMs) are becoming the foundation for almost all major language technologies, but their capabilities, limitations, and risks are not well understood. We present Holistic Evaluation of Language Models (HELM) to improve the transparency of language models. First, we taxonomize the vast space of potential ...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
hendrycks2020measuring
\cite{hendrycks2020measuring}
Measuring Massive Multitask Language Understanding
http://arxiv.org/abs/2009.03300v3
We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent mode...
true
true
Hendrycks, Dan and Burns, Collin and Basart, Steven and Zou, Andy and Mazeika, Mantas and Song, Dawn and Steinhardt, Jacob
2,021
null
null
null
null
Measuring Massive Multitask Language Understanding
Measuring Massive Multitask Language Understanding
http://arxiv.org/pdf/2009.03300v3
We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent mode...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
open-llm-leaderboard-v2
\cite{open-llm-leaderboard-v2}
Open LLM Leaderboard v2
null
null
true
false
Clémentine Fourrier and Nathan Habib and Alina Lozovskaya and Konrad Szafer and Thomas Wolf
2,024
null
null
null
null
Open LLM Leaderboard v2
Hugging Face Upgrades Open LLM Leaderboard v2 for ... - InfoQ
https://www.infoq.com/news/2024/10/open-llm-leaderboard-v2-launch/
null
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
blodgett-etal-2020-language
\cite{blodgett-etal-2020-language}
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
http://arxiv.org/abs/2005.14050v2
We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing "bias" is an inherently normative process. We further find that these papers' proposed quantitative techniques for measuring or mitigati...
true
true
Blodgett, Su Lin and Barocas, Solon and Daumé III, Hal and Wallach, Hanna
2,020
null
null
null
null
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
http://arxiv.org/pdf/2005.14050v2
We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing "bias" is an inherently normative process. We further find that these papers' proposed quantitative techniques for measuring or mitigati...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
yang2024assessing
\cite{yang2024assessing}
Assessing Adversarial Robustness of Large Language Models: An Empirical Study
http://arxiv.org/abs/2405.02764v2
Large Language Models (LLMs) have revolutionized natural language processing, but their robustness against adversarial attacks remains a critical concern. We presents a novel white-box style attack approach that exposes vulnerabilities in leading open-source LLMs, including Llama, OPT, and T5. We assess the impact of m...
true
true
Yang, Zeyu and Meng, Zhao and Zheng, Xiaochen and Wattenhofer, Roger
2,024
null
null
null
null
Assessing Adversarial Robustness of Large Language Models: An Empirical Study
[PDF] Assessing Adversarial Robustness of Large Language Models
https://genai-evaluation-kdd2024.github.io/genai-evalution-kdd2024/assets/papers/GenAI_Evaluation_KDD2024_paper_24.pdf
In this paper, we present an extensive study of three leading open- source LLMs: Llama, OPT, and T5. We evaluate the robustness of various sizes
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
hartvigsen2022toxigen
\cite{hartvigsen2022toxigen}
ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection
http://arxiv.org/abs/2203.09509v4
Toxic language detection systems often falsely flag text that contains minority group mentions as toxic, as those groups are often the targets of online hate. Such over-reliance on spurious correlations also causes systems to struggle with detecting implicitly toxic language. To help mitigate these issues, we create To...
true
true
Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece
2,022
null
null
null
null
ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection
ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial ...
https://www.researchgate.net/publication/361059047_ToxiGen_A_Large-Scale_Machine-Generated_Dataset_for_Adversarial_and_Implicit_Hate_Speech_Detection
Toxigen is a large-scale dataset featuring over 270K machine-generated toxic and benign statements about 13 minority groups, specifically designed to expose
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
magooda2023framework
\cite{magooda2023framework}
A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications
http://arxiv.org/abs/2310.17750v1
We present a framework for the automated measurement of responsible AI (RAI) metrics for large language models (LLMs) and associated products and services. Our framework for automatically measuring harms from LLMs builds on existing technical and sociotechnical expertise and leverages the capabilities of state-of-the-a...
true
true
Magooda, Ahmed and Helyar, Alec and Jackson, Kyle and Sullivan, David and Atalla, Chad and Sheng, Emily and Vann, Dan and Edgar, Richard and Palangi, Hamid and Lutz, Roman and others
2,023
null
null
null
arXiv preprint arXiv:2310.17750
A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications
A Framework for Automated Measurement of Responsible ...
https://www.microsoft.com/en-us/research/publication/a-framework-for-automated-measurement-of-responsible-ai-harms-in-generative-ai-applications/?locale=zh-cn
We present a framework for the automated measurement of responsible AI (RAI) metrics for large language models (LLMs) and associated products
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
li2023survey
\cite{li2023survey}
A Survey on Fairness in Large Language Models
http://arxiv.org/abs/2308.10149v2
Large Language Models (LLMs) have shown powerful performance and development prospects and are widely deployed in the real world. However, LLMs can capture social biases from unprocessed training data and propagate the biases to downstream tasks. Unfair LLM systems have undesirable social impacts and potential harms. I...
true
true
Li, Yingji and Du, Mengnan and Song, Rui and Wang, Xin and Wang, Ying
2,023
null
null
null
arXiv preprint arXiv:2308.10149
A Survey on Fairness in Large Language Models
A Survey on Fairness in Large Language Models
http://arxiv.org/pdf/2308.10149v2
Large Language Models (LLMs) have shown powerful performance and development prospects and are widely deployed in the real world. However, LLMs can capture social biases from unprocessed training data and propagate the biases to downstream tasks. Unfair LLM systems have undesirable social impacts and potential harms. I...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
mackraz2024evaluating
\cite{mackraz2024evaluating}
Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models
http://arxiv.org/abs/2412.03537v1
Large language models (LLMs) are increasingly being adapted to achieve task-specificity for deployment in real-world decision systems. Several previous works have investigated the bias transfer hypothesis (BTH) by studying the effect of the fine-tuning adaptation strategy on model fairness to find that fairness in pre-...
true
true
Mackraz, Natalie and Sivakumar, Nivedha and Khorshidi, Samira and Patel, Krishna and Theobald, Barry-John and Zappella, Luca and Apostoloff, Nicholas
2,024
null
null
null
arXiv preprint arXiv:2412.03537
Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models
Evaluating Gender Bias Transfer between Pre-trained and Prompt ...
https://openreview.net/forum?id=HyN9POiYhN
The primary purpose of this research is to understand if intrinsic bias in pre-trained models can transfer to downstream tasks upon prompting, to gain
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
patel2024fairness
\cite{patel2024fairness}
Fairness Dynamics During Training
http://arxiv.org/abs/2506.01709v1
We investigate fairness dynamics during Large Language Model (LLM) training to enable the diagnoses of biases and mitigations through training interventions like early stopping; we find that biases can emerge suddenly and do not always follow common performance metrics. We introduce two new metrics to evaluate fairness...
true
true
Patel, Krishna and Sivakumar, Nivedha and Theobald, Barry-John and Zappella, Luca and Apostoloff, Nicholas
null
null
null
null
Neurips Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI Workshop 2024
Fairness Dynamics During Training
Fairness Dynamics During Training
http://arxiv.org/pdf/2506.01709v1
We investigate fairness dynamics during Large Language Model (LLM) training to enable the diagnoses of biases and mitigations through training interventions like early stopping; we find that biases can emerge suddenly and do not always follow common performance metrics. We introduce two new metrics to evaluate fairness...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
laskar2023systematic
\cite{laskar2023systematic}
A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets
http://arxiv.org/abs/2305.18486v4
The development of large language models (LLMs) such as ChatGPT has brought a lot of attention recently. However, their evaluation in the benchmark academic datasets remains under-explored due to the difficulty of evaluating the generative outputs produced by this model against the ground truth. In this paper, we aim t...
true
true
Laskar, Md Tahmid Rahman and Bari, M Saiful and Rahman, Mizanur and Bhuiyan, Md Amran Hossen and Joty, Shafiq and Huang, Jimmy Xiangji
2,023
null
null
null
null
A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets
A Systematic Study and Comprehensive Evaluation of ChatGPT on ...
https://arxiv.org/abs/2305.18486
arXiv:2305.18486 (cs): A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, by Md Tahmid Rahman Laskar and 5 other authors.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
chu2024fairness
\cite{chu2024fairness}
Fairness in Large Language Models: A Taxonomic Survey
http://arxiv.org/abs/2404.01349v2
Large Language Models (LLMs) have demonstrated remarkable success across various domains. However, despite their promising performance in numerous real-world applications, most of these algorithms lack fairness considerations. Consequently, they may lead to discriminatory outcomes against certain communities, particula...
true
true
Chu, Zhibo and Wang, Zichong and Zhang, Wenbin
2,024
null
null
null
ACM SIGKDD explorations newsletter
Fairness in Large Language Models: A Taxonomic Survey
Fairness in Large Language Models: A Taxonomic Survey
http://arxiv.org/pdf/2404.01349v2
Large Language Models (LLMs) have demonstrated remarkable success across various domains. However, despite their promising performance in numerous real-world applications, most of these algorithms lack fairness considerations. Consequently, they may lead to discriminatory outcomes against certain communities, particula...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
wang2024ceb
\cite{wang2024ceb}
CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models
http://arxiv.org/abs/2407.02408v2
As Large Language Models (LLMs) are increasingly deployed to handle various natural language processing (NLP) tasks, concerns regarding the potential negative societal impacts of LLM-generated content have also arisen. To evaluate the biases exhibited by LLMs, researchers have recently proposed a variety of datasets. H...
true
true
Wang, Song and Wang, Peng and Zhou, Tong and Dong, Yushun and Tan, Zhen and Li, Jundong
2,024
null
null
null
arXiv preprint arXiv:2407.02408
CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models
CEB: Compositional Evaluation Benchmark for Fairness in Large...
https://openreview.net/forum?id=IUmj2dw5se
Summary: This paper proposes a comprehensive benchmark for bias and fairness in large language models. The authors first propose a multi-layers taxonomy that
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
ye2024benchmarking
\cite{ye2024benchmarking}
Benchmarking LLMs via Uncertainty Quantification
http://arxiv.org/abs/2401.12794v3
The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods. However, current evaluation platforms, such as the widely recognized HuggingFace open LLM leaderboard, neglect a crucial aspect -- uncertainty, which is vital for...
true
true
Ye, Fanghua and Yang, Mingming and Pang, Jianhui and Wang, Longyue and Wong, Derek F and Yilmaz, Emine and Shi, Shuming and Tu, Zhaopeng
2,024
null
null
null
arXiv preprint arXiv:2401.12794
Benchmarking LLMs via Uncertainty Quantification
Benchmarking LLMs via Uncertainty Quantification
http://arxiv.org/pdf/2401.12794v3
The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods. However, current evaluation platforms, such as the widely recognized HuggingFace open LLM leaderboard, neglect a crucial aspect -- uncertainty, which is vital for...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
fabris2022algorithmic
\cite{fabris2022algorithmic}
Algorithmic Fairness Datasets: the Story so Far
http://arxiv.org/abs/2202.01711v4
Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being. As a result, a growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automa...
true
true
Fabris, Alessandro and Messina, Stefano and Silvello, Gianmaria and Susto, Gian Antonio
2,022
null
null
null
null
Algorithmic Fairness Datasets: the Story so Far
Algorithmic Fairness Datasets: the Story so Far
http://arxiv.org/pdf/2202.01711v4
Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being. As a result, a growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automa...