The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 25 new columns ({'extracted_with_lim_Div7', 'Concatenated Text', 'review_summary_Div6', 'col_6', 'extracted_with_lim_Div6', 'extracted_with_lim_Div3', 'col_7', 'extracted_with_lim_Div4', 'review_summary_Div5', 'review_summary_Div4', 'forum', 'col_5', 'extracted_with_lim_Div5', 'extracted_with_lim_Div2', 'col_3', 'review_summary_Div3', 'col_8', 'review_summary_Div2', 'review_summary_Div7', 'col_4', 'col_2', 'Lim_word_count', 'Abstract', 'keywords', 'Concatenated_Limitations'}) and 6 missing columns ({'id', 'Future_Work', 'references', 'authors', 'abstract', 'category'}).
This happened while the csv dataset builder was generating data using
hf://datasets/datalab2/Limitation_generation_dataset/NeurIps 21_22.csv (at revision b1220f19db7f0a7c6c64e9c22cb6e77e37c0c391)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
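One way to diagnose such a mismatch before uploading is to compare the header rows of all CSV files in the repository. A minimal sketch using only the standard library (the helper names are our own, not part of the datasets library):

```python
import csv


def csv_headers(path):
    """Return the header row of a CSV file as a list of column names."""
    with open(path, newline="", encoding="utf-8") as f:
        return next(csv.reader(f))


def find_column_mismatches(paths):
    """Compare every file's columns against the first file's columns.

    Returns {path: (new_columns, missing_columns)} for files that differ,
    mirroring the "new columns" / "missing columns" wording of the error.
    """
    reference = set(csv_headers(paths[0]))
    mismatches = {}
    for p in paths[1:]:
        cols = set(csv_headers(p))
        new, missing = cols - reference, reference - cols
        if new or missing:
            mismatches[p] = (sorted(new), sorted(missing))
    return mismatches
```

If the files legitimately have different schemas, the fix suggested above is to declare separate configurations in the dataset's YAML metadata rather than to force a common schema.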
Traceback:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1871, in _prepare_split_single
writer.write_table(table)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 643, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2293, in table_cast
return cast_table_to_schema(table, schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2241, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
forum: string
title: string
keywords: string
Abstract: string
Introduction: string
Related_Work: string
Methodology: string
Dataset: string
Conclusion: string
Experiment_and_Results: string
col_2: string
col_3: string
col_4: string
col_5: string
col_6: string
col_7: string
col_8: string
Concatenated Text: string
Limitation: string
extracted_with_lim_Div2: string
extracted_with_lim_Div3: string
extracted_with_lim_Div4: string
extracted_with_lim_Div5: string
extracted_with_lim_Div6: string
extracted_with_lim_Div7: string
review_summary_Div2: string
review_summary_Div3: string
review_summary_Div4: string
review_summary_Div5: string
review_summary_Div6: string
review_summary_Div7: string
Lim_word_count: int64
Concatenated_Limitations: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 4463
to
{'title': Value(dtype='string', id=None), 'authors': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'abstract': Value(dtype='string', id=None), 'Introduction': Value(dtype='string', id=None), 'Related_Work': Value(dtype='string', id=None), 'Methodology': Value(dtype='string', id=None), 'Dataset': Value(dtype='string', id=None), 'Conclusion': Value(dtype='string', id=None), 'Experiment_and_Results': Value(dtype='string', id=None), 'Future_Work': Value(dtype='string', id=None), 'Limitation': Value(dtype='string', id=None), 'references': Value(dtype='string', id=None), 'category': Value(dtype='string', id=None)}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1433, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1050, in convert_to_parquet
builder.download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 925, in download_and_prepare
self._download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1742, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1873, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
Preview columns (all of type string): title, authors, id, abstract, Introduction, Related_Work, Methodology, Dataset, Conclusion, Experiment_and_Results, Future_Work, Limitation, references, category.
title: Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models
authors: [{"affiliations": [], "name": "Zhengxin Zhang"}, {"affiliations": [], "name": "Dan Zhao"}, {"affiliations": [], "name": "Xupeng Miao"}, {"affiliations": [], "name": "Gabriele Oliaro"}, {"affiliations": [], "name": "Zhihao Zhang"}, {"affiliations": [], "name": "Qing Li"}, {"affiliations": [], "name": "Yong Jiang"}, {"affiliations": [], "name": "Zhihao Jia"}]
id: SP:5360da9e3f49cc7e048171d345a89bf468de8c61
abstract:
Finetuning large language models (LLMs) has been empirically effective on a variety of downstream tasks. Existing approaches to finetuning an LLM either focus on parameter-efficient finetuning, which only updates a small number of trainable parameters, or attempt to reduce the memory footprint during the training phase of the finetuning. Typically, the memory footprint during finetuning stems from three contributors: model weights, optimizer states, and intermediate activations. However, existing works still require considerable memory, and none can simultaneously mitigate the memory footprint of all three sources. In this paper, we present quantized side tuning (QST), which enables memory-efficient and fast finetuning of LLMs by operating through a dual-stage process. First, QST quantizes an LLM's model weights into 4-bit to reduce the memory footprint of the LLM's original weights. Second, QST introduces a side network separated from the LLM, which utilizes the hidden states of the LLM to make task-specific predictions. Using a separate side network avoids performing backpropagation through the LLM, thus reducing the memory requirement of the intermediate activations. Finally, QST leverages several low-rank adaptors and gradient-free downsample modules to significantly reduce the trainable parameters, so as to save the memory footprint of the optimizer states. Experiments show that QST can reduce the total memory footprint by up to 2.3× and speed up the finetuning process by up to 3× while achieving competent performance compared with the state-of-the-art. Compared with full finetuning, QST can reduce the total memory footprint by up to 7×.
Introduction:
Recent advancements in large language models (LLMs), including GPT (Brown et al., 2020; Floridi and Chiriatti, 2020; OpenAI, 2023), PaLM (Chowdhery et al., 2022), OPT (Zhang et al., 2022), and LLaMA (Touvron et al., 2023), have showcased remarkable task-generalization capabilities across diverse applications
(Stiennon et al., 2020; Dosovitskiy et al., 2020). The ongoing evolution of LLMs’ capabilities is accompanied by exponential increases in LLMs’ sizes, with some models encompassing 100 billion parameters (Raffel et al., 2020; Scao et al., 2022). Finetuning pre-trained LLMs (Min et al., 2021; Wang et al., 2022b,a; Liu et al., 2022) for customized downstream tasks provides an effective approach to introducing desired behaviors, mitigating undesired ones, and thus boosting the LLMs’ performance (Ouyang et al., 2022; Askell et al., 2021; Bai et al., 2022). Nevertheless, the process of LLM finetuning is characterized by its substantial memory demands. For instance, finetuning a 16-bit LLaMA model with 65 billion parameters requires more than 780GB of memory (Dettmers et al., 2023).
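The 780GB figure is plausible from a back-of-the-envelope count: with 16-bit weights, standard finetuning keeps the weights, a gradient, and Adam's two fp32 moment tensors per parameter, before any activation memory. A rough estimate (the function and its defaults are our own illustration; activation size is workload-dependent and omitted):

```python
def finetune_memory_gb(n_params, weight_bytes=2, grad_bytes=2,
                       optimizer_bytes=8):
    """Rough lower bound on finetuning memory in decimal GB, ignoring
    activations.

    optimizer_bytes=8 assumes Adam's two fp32 moment tensors (4+4 bytes
    per parameter); many setups also keep an fp32 master copy of the
    weights, which would add another 4 bytes per parameter.
    """
    total_bytes = n_params * (weight_bytes + grad_bytes + optimizer_bytes)
    return total_bytes / 1e9


# 65B parameters: 65e9 * (2 + 2 + 8) bytes = 780 GB, before activations
estimate = finetune_memory_gb(65e9)
```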
To reduce the computational requirement of LLM finetuning, recent work introduces parameter-efficient finetuning (PEFT), which updates a subset of trainable parameters of an LLM or introduces a small number of new parameters into the LLM while keeping the vast majority of the original LLM parameters frozen (Houlsby et al., 2019; Li and Liang, 2021; Pfeiffer et al., 2020; Hu et al., 2021; He et al., 2021; Lester et al., 2021). PEFT methods achieve performance comparable to full finetuning while enabling fast adaptation to new tasks without suffering from catastrophic forgetting (Pfeiffer et al., 2020). However, PEFT methods necessitate caching intermediate activations during the forward pass, since these activations are needed to update trainable parameters during backward propagation. As a result, PEFT methods require saving more than 70% of the activations and almost the same training time compared to full finetuning (Liao et al., 2023; Sung et al., 2022). Concisely, existing PEFT techniques cannot effectively reduce the memory footprint of LLM finetuning, restricting their application in numerous real-world memory-constrained scenarios.
Recent work has also introduced approaches combining PEFT and quantization. For example, QLoRA (Dettmers et al., 2023) quantizes an LLM's weights to 4-bit and leverages low-rank adaptation (LoRA) (He et al., 2021) to finetune the quantized LLM. QLoRA reduces the memory footprint of an LLM's weights and optimizer states; as a result, finetuning a 65B LLM requires less than 48 GB of memory. However, QLoRA does not consider the memory footprint of intermediate
[Figure 1: (a) The memory footprint of finetuning LLaMA-2-70B with full finetuning, QLoRA, LST, and QST, broken down into intermediate activations, optimizer states, and weights; QLoRA reduces the weights and the optimizer states, LST reduces the intermediate activations and the optimizer states, and QST (ours) reduces all three. (b) The MMLU 5-shot accuracy versus memory (GB) of QLoRA, LST, and QST when tuning LLaMA-2-7B, LLaMA-2-13B, and LLaMA-2-70B. The batch size is set to 16 and the sequence length to 384; larger markers represent larger models.]
activations, which can be particularly large when using a large batch size for finetuning. As a result, QLoRA only supports small-batch training (e.g., a batch size of 1), and finetuning a 65B LLM requires gradient checkpointing (Chen et al., 2016) to fit the LLM on a single 48GB GPU, resulting in long training time. Besides, our evaluation also reveals that the performance of QLoRA becomes unstable when using 16-bit floating points. Sung et al. (2022) and Zhang et al. (2020) propose to use a side network to reduce the memory footprint of intermediate activations by avoiding backpropagation through the LLM on natural language processing (NLP) and computer vision (CV) tasks, respectively. Even with the adoption of a side network, the inherent model size of the LLM remains a challenge. Meanwhile, these approaches focus on small models (i.e., fewer than 3 billion parameters), and their applicability and efficacy for larger models remain unexplored.
In this paper, we propose a fast, memory-efficient LLM finetuning framework, called Quantized Side Tuning (QST), which operates through a dual-stage process as shown in Figure 2. First, QST quantizes an LLM into 4-bit to reduce the memory footprint of its model weights. Second, QST introduces a side network separated from the quantized LLM to avoid performing backward propagation through the quantized LLM, thus saving the memory footprint of intermediate activations. During the training phase of QST, the input to each layer of the side network is formed by combining (1) the downsampled output of the corresponding quantized LLM layer and (2) the output of the previous layer of the side network. A larger LLM usually has a larger model depth (i.e., the number of layers) and width (the hidden size of each layer), which in turn requires more trainable parameters for the downsampling layers. Unlike Sung et al. (2022), which leverages a linear layer to perform downsampling, QST uses several low-rank adapter methods (He et al., 2021; Edalati et al., 2022) as well as gradient-free downsample modules such as MaxPooling (LeCun et al., 1998) and AvgPooling, significantly reducing the required trainable parameters and the memory footprint of the optimizer states. After that, we use a learnable parameter to assign weights and subsequently aggregate the hidden states of the quantized LLM and the side network. Finally, we reuse the LLM head or classifier to make predictions. Combining 4-bit quantization and side tuning, QST significantly reduces all three main contributors to the memory footprint during the training phase, as well as the training time. Besides, QST does not increase inference latency since the LLM and the side network can be computed in parallel. Figure 1 compares the memory footprint of QST and existing parameter-efficient finetuning methods, including QLoRA and LST.
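The per-layer combination described above can be sketched abstractly. The sketch below is a toy, framework-free rendition of the data flow, with names (`avg_pool_downsample`, `side_layer_input`, `alpha`) and the sigmoid gating form chosen by us for illustration; in the actual method the downsample is a low-rank adapter or pooling module and the LLM runs in 4-bit:

```python
import math


def avg_pool_downsample(h, r):
    """Gradient-free downsample: average every r consecutive features,
    reducing a hidden vector of size d to size d/r."""
    return [sum(h[i:i + r]) / r for i in range(0, len(h), r)]


def side_layer_input(llm_hidden, side_prev, alpha, r):
    """Combine the downsampled LLM hidden state with the previous
    side-network layer's output, weighted by a learnable scalar alpha."""
    gate = 1.0 / (1.0 + math.exp(-alpha))  # sigmoid of the learnable scalar
    down = avg_pool_downsample(llm_hidden, r)
    return [gate * d + (1.0 - gate) * s for d, s in zip(down, side_prev)]
```

Because the side network only consumes the (frozen) LLM's hidden states, gradients never flow through the LLM, which is what removes the LLM's activation memory from the backward pass.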
To validate the effectiveness of QST, we conduct extensive evaluations for different types of LLMs (e.g., OPT, LLaMA 2), with 1.3B to 70B parameters, on various benchmarks. Experimental results show that QST can reduce the total memory footprint by up to 2.3× and speed up the finetuning process by up to 3× while achieving competent performance compared with the state-of-the-art. Our code is released on GitHub.1
Related_Work:
Finetuning allows an LLM to adapt to specialized domains and tasks (Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020). However, fully finetuning an LLM comes with high computation costs due to rapidly increasing LLM sizes. Parameter-efficient finetuning (PEFT) methods have been proposed to address this issue. Drawing inspiration from the pronounced sensitivity of LLMs to prompts highlighted in Schick and Schütze (2020), a series of studies introduce trainable prompt embeddings prepended to the input text or attention components while preserving the original LLM parameters (Liu et al., 2023; Li and Liang, 2021; Lester et al., 2021). Rusu et al. (2016) and Houlsby et al. (2019) propose adapter modules that introduce new task-specific parameters, which are inserted into the Transformer layers of the LLM. LoRA (Hu et al., 2021) leverages low-rank decomposition to construct trainable parameters inserted into the original LLM weights. (IA)3 (Liu et al., 2022) proposes to scale the pre-trained weights of an LLM with a trainable vector. More recently, there has been a surge of unified approaches that amalgamate various PEFT methods by leveraging human heuristics (He et al., 2021) or employing neural architecture search (Zhou et al., 2023; Zoph and Le, 2016; Mao et al., 2021). Existing PEFT approaches focus on optimizing model performance while minimizing trainable parameters. However, a reduction in the number of trainable parameters does not inherently imply a corresponding reduction in memory footprint.

Memory-efficient training and finetuning aim to reduce the memory footprint during the LLM training and/or finetuning phase. Reversible neural networks (Gomez et al., 2017; Kitaev et al., 2020; Mangalam et al., 2022) allow the intermediate activations of each layer to be recomputed from the activations of the next layer, thus exempting the need to save intermediate activations. Gradient checkpointing (Chen et al., 2016) offers an optimization strategy that balances computational resources against memory footprint: it reduces memory requirements by selectively discarding certain intermediate activations, which are subsequently recomputed through an additional forward pass when needed. Another approach to enhancing memory efficiency is network compression, in which the original LLM is reduced to a more compact form, thereby making both the training and inference phases more computationally economical. Network pruning and distillation are the most prevalent compression strategies. Network distillation (Hinton et al., 2015; Koratana et al., 2019) trains a student network to approximate the output distribution of a teacher network across a specified dataset. Network pruning (Frankle and Carbin, 2018; Frankle et al., 2020) streamlines models by ascertaining the significance of individual parameters and eliminating those deemed non-essential. Compared with PEFT methods, network compression yields models optimized for expedited inference, whereas PEFT methods may achieve superior performance by updating a small set of trainable parameters.

1 https://github.com/YouAreSpecialToMe/QST
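The store-versus-recompute trade of gradient checkpointing can be illustrated on a toy chain of scalar multiplications: only every k-th activation is kept, and any activation the backward pass needs is recomputed from the nearest stored checkpoint. This is our own self-contained sketch of the idea, not the Chen et al. (2016) implementation:

```python
def forward_with_checkpoints(x, ws, k):
    """Run the chain a_{i+1} = a_i * w_i, storing only every k-th activation."""
    ckpts = {0: x}
    a = x
    for i, w in enumerate(ws):
        a = a * w
        if (i + 1) % k == 0:
            ckpts[i + 1] = a          # keep 1/k of the activations
    return a, ckpts


def activation(i, ws, ckpts, k):
    """Recompute a_i (the input to layer i) from the nearest checkpoint."""
    j = (i // k) * k                  # nearest stored index <= i
    a = ckpts[j]
    for step in range(j, i):          # extra forward work traded for memory
        a = a * ws[step]
    return a


def grad_ws(ws, ckpts, k):
    """dy/dw_i = a_i * prod_{j>i} w_j, with a_i recomputed on demand."""
    grads = [0.0] * len(ws)
    suffix = 1.0
    for i in range(len(ws) - 1, -1, -1):
        grads[i] = activation(i, ws, ckpts, k) * suffix
        suffix *= ws[i]
    return grads
```

With k layers per segment, peak activation memory drops by roughly a factor of k at the cost of one extra forward pass over each segment.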
Recently, QLoRA (Dettmers et al., 2023) quantizes the LLM to 4-bit and then adds LoRA to finetune the quantized LLM. QLoRA significantly reduces the memory footprint of weights and optimizer states compared with full finetuning while retaining similar performance. However, QLoRA does not consider the memory footprint of intermediate activations, and thus falls short in finetuning the LLM with a large batch size, resulting in long training times. In the context of NLP and CV tasks, the studies by Sung et al. (2022) and Zhang et al. (2020) introduce the concept of employing a side network, which obviates the need for backpropagation through the LLM, thereby reducing the memory footprint associated with intermediate activations. Despite incorporating the side network, the inherent model size (i.e., the memory footprint of the weights) of the LLM still poses computational challenges. Hence, both methods focus exclusively on models with fewer than 3 billion parameters and cannot finetune larger models.
Methodology: null
Dataset: null
Conclusion:
In this paper, we propose Quantized Side Tuning (QST), a novel fast and memory-efficient finetuning framework. QST operates through a dual-stage process: first, QST quantizes the LLM into 4-bit to reduce the memory footprint of the LLM's weights; then QST introduces a side network separated from the LLM, which utilizes the hidden states of the LLM to make task-specific predictions. QST can significantly reduce the memory footprint of LLM finetuning compared to existing approaches. In particular, experiments show that QST can reduce the total memory footprint by up to 2.3× and speed up the finetuning process by up to 3× while achieving performance comparable to the state-of-the-art.
Experiment_and_Results:
In this section, we empirically validate the effectiveness of our QST method by examining its performance for LLMs of different types (e.g., OPT and LLaMA 2) and sizes (from 1.3B to 70B) on various benchmarks.

Datasets. We evaluate the performance of QST and several baselines on natural language understanding (NLU) and natural language generation tasks. For NLU experiments, we use the GLUE (General Language Understanding Evaluation; Wang et al., 2018) and MMLU (Massive Multitask Language Understanding; Hendrycks et al., 2020) benchmarks. The GLUE benchmark provides a comprehensive evaluation of models across a range of linguistic tasks: linguistic acceptability as examined in CoLA (Warstadt et al., 2019), sentiment analysis as portrayed in SST-2 (Socher et al., 2013), tasks probing similarity and paraphrase distinctions such as MRPC (Dolan and Brockett, 2005), QQP (Iyer, 2017), and STS-B (Cer et al., 2017), and natural language inference tasks including MNLI (Williams et al., 2017), QNLI (Rajpurkar et al., 2016), and RTE (Bentivogli et al., 2009). We report accuracy on MNLI, QQP, QNLI, SST-2, MRPC, and RTE, Pearson correlation coefficients on STS-B, and Matthews correlation coefficients (Matthews, 1975) on CoLA. The MMLU benchmark consists of 57 tasks including elementary mathematics, US history, computer science, law, and more. We report the average 5-shot test accuracy over the 57 tasks.

Models. We use decoder-only LLMs from the OPT series (OPT-1.3B, OPT-2.7B, OPT-6.7B, OPT-13B, OPT-30B, and OPT-66B) and the LLaMA-2 series (LLaMA-2-7B, LLaMA-2-13B, and LLaMA-2-70B).

Baselines. We compare QST with QLoRA (Dettmers et al., 2023), LST (Sung et al., 2022), LoRA (He et al., 2021), and Adapter (Houlsby et al., 2019). Note that we only compare against LST, LoRA, and Adapter when the model size is less than 3B, since beyond that their memory footprint of weights becomes excessively large.

Implementation. We set the reduction factor r to 16 by default. We use Adapter as the downsample module, a linear layer as the upsample module, and set the rank of the Adapter to 16. We use the NF4 data type to store the weights of the LLM and bfloat16 as the data type for computation. We adopt the same parameters reported in QLoRA, LST, LoRA, and Adapter to construct the baselines. Other hyperparameters are specified in Appendix A and Appendix B. We run each experiment three times under different random seeds and report the average performance. We conduct all experiments using PyTorch (Paszke et al., 2017) and the HuggingFace library (Wolf et al., 2019) on 4 NVIDIA RTX A5000 GPUs, each with 24GB of memory.

Table 1 shows the performance of different methods on the GLUE benchmark. Overall, QST achieves the lowest memory footprint among all methods while attaining competent accuracy. Particularly, for relatively small models (i.e., OPT-1.3B and OPT-2.7B), QST reduces the memory footprint by around 2× compared with QLoRA, LoRA, and Adapter, while achieving comparable accuracy. Compared with LST, QST reduces the memory requirement by 3.2GB and 6.3GB for finetuning OPT-1.3B and OPT-2.7B, respectively. QST also reduces the trainable parameters by around 10× and 5× compared with QLoRA and the other baselines, respectively.
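For reference, the Matthews correlation coefficient reported for CoLA is computed from the binary confusion matrix; a minimal standalone implementation:

```python
import math


def matthews_corrcoef(y_true, y_pred):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).

    Returns 0.0 when any marginal is empty, following common convention.
    Labels are assumed to be 0/1.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike plain accuracy, MCC stays informative on the class-imbalanced CoLA split, which is why it is the standard metric there.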
For larger models such as OPT-6.7B, we focus on comparing QST with QLoRA. This is because QLoRA has accuracy similar to the other baselines, while LoRA, Adapter, and LST all have excessively large memory footprints of weights when finetuning OPT-6.7B.2 Compared with QLoRA, QST reduces the memory footprint and trainable parameters by 2.3× and 5.5×, while only introducing a 1.3% accuracy drop.

The experiment results on the MMLU benchmark are shown in Table 2. We set the batch size to 4 and the sequence length to 384. We use the Alpaca dataset (Taori et al., 2023) to finetune both QLoRA and QST. We compare QST with QLoRA on accuracy and memory requirement over OPT-1.3B, OPT-2.7B, OPT-6.7B, OPT-13B, OPT-30B, OPT-66B, LLaMA-2-7B, LLaMA-2-13B, and LLaMA-2-70B. QST improves the accuracy by 0.1% on average while reducing the memory footprint by 1.8× compared with QLoRA. Particularly, QST yields an enhancement of 2.1% in accuracy over QLoRA when finetuning LLaMA-2-13B. When finetuning the OPT-2.7B, OPT-6.7B, and OPT-13B models, QST achieves 0.3%, 0.6%, and 0.3% accuracy improvements, respectively.

Effects of batch size. Figure 5(a) illustrates the effects of batch size for different methods. We use LLaMA-2-70B as the LLM and set the sequence length to 512.

2 QLoRA can leverage gradient accumulation to finetune with a batch size of 16 while guaranteeing an affordable memory footprint.
While the memory footprint of all methods increases with batch size, QST achieves the lowest memory footprint among all, regardless of the batch size. Particularly, the memory footprint of QST is only one-third of that of LoRA and Adapter. Besides, the memory footprint of both QST and LST grows less drastically than that of QLoRA, Adapter, and LoRA as the batch size increases. This is because both LST and QST use side tuning to reduce the hidden dimension of the intermediate activations, thereby alleviating the growth of the memory footprint induced by intermediate activations. QST also achieves an additional reduction of approximately 100GB in memory footprint compared to LST, thanks to the 4-bit quantization that effectively compresses the memory footprint of the weights and the careful design of the downsample modules that reduces the optimizer states.
Effects of model size. Figure 5(b) shows the effects of the total model bits on different methods. We use the OPT model series and set the batch size to 4. Due to the 4-bit quantization, QST and QLoRA reduce the memory footprint compared with the other baselines, and the gap further widens as the model size increases. Besides, QST achieves around a 2× reduction in memory footprint compared with QLoRA thanks to its small volume of trainable parameters and intermediate activations.
Effects of sequence length. Figure 5(c) shows the effects of sequence length on different methods. We use LLaMA-2-70B and set the batch size to 4. Similar to the effect of batch size, LST and QST alleviate the growth rate of the memory footprint of intermediate activations, while QST further achieves around a 100GB reduction in memory footprint compared with LST.

Table 3 shows the training throughput of different methods, measured by FLOPS per token (lower is better), on LLaMA-2-7B, LLaMA-2-13B, and LLaMA-2-70B. While the FLOPS per token of all methods increases as the model size grows, QST achieves the lowest FLOPS per token among all. Particularly, QST achieves around a 2.5× speedup compared with the baselines. LST suffers from the highest FLOPS per token. The FLOPS per token of QLoRA is slightly higher than that of LoRA and Adapter since QLoRA adds more LoRA components.

Effects of reduction factor r. We conduct experiments using LLaMA-2-7B, LLaMA-2-13B, and LLaMA-2-70B to verify the effects of the reduction factor r (from 2 to 64) on memory footprint, MMLU accuracy, and throughput. We set the batch size to 4 and the sequence length to 384. The MMLU accuracy changes only slightly as r varies, as shown in Figure 5a. QST achieves the best accuracy when finetuning LLaMA-2-7B and LLaMA-2-13B with r set to 16. As shown in Figures 5b and 5c, the memory footprint and the FLOPS per token decrease drastically as r varies from 2 to 16 for all models, and decrease only slightly as r varies from 16 to 64. Therefore, we set r to 16 by default in our experiments.

Effects of 4-bit data types. We evaluate two 4-bit data types, FP4 and NF4, using the LLaMA-2 model series and the MMLU benchmark. As shown in Table 4, NF4 improves the average accuracy by about 0.8% compared with FP4. Therefore, we use NF4 as the default 4-bit data type in our experiments.
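The effect of a 4-bit data type can be illustrated with plain symmetric absmax quantization to 15 of the 16 possible codes; note that NF4 instead uses 16 non-uniform levels placed for normally distributed weights, so the uniform codebook below is illustrative only:

```python
def absmax_quantize_4bit(weights):
    """Map floats to integer codes in [-7, 7] via absmax scaling."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # guard all-zero input
    codes = [round(w / scale) for w in weights]
    return codes, scale


def dequantize_4bit(codes, scale):
    """Reconstruct approximate floats from the codes and the stored scale."""
    return [c * scale for c in codes]
```

Only the 4-bit codes and one scale per block of weights need to be stored, which is the source of the roughly 4× weight-memory reduction over 16-bit storage.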
Effects of computation data types. We analyze the effects of two computation data types: BF16 (results shown in Table 1) and FP16 (results shown in Table 5). As can be seen, QST retains similar results using FP16 and BF16. On the other hand, QLoRA is unstable when using FP16 as the computation data type: finetuning OPT-6.7B on the GLUE benchmark, we discover that QLoRA fails on the MRPC and QNLI datasets, failing under two of three different random seeds on each.

Effects of downsample modules. We conduct experiments on different downsample modules (Linear, LoRA, Adapter, MaxPooling, and AvgPooling) using LLaMA-2-7B and the MMLU benchmark. As shown in Table 6, using Adapter as the downsample module achieves the best performance among all baselines while reducing the trainable parameters and memory footprint.

We also conduct experiments on chatbot performance using MT-Bench (Zheng et al., 2023), a set of challenging multi-turn open-ended questions for evaluating a chat assistant's performance across the writing, roleplay, reasoning, math, coding, extraction, STEM, and humanities categories. In our experiments, we use GPT-4 as a judge to assess the quality of the responses of the models finetuned by QLoRA and QST. We finetune LLaMA-2-70B using a variant of OASST1 (Dettmers et al., 2023). Table 7 shows the experiment results of QLoRA and QST on total training time, memory footprint, and the average MT-Bench score over the 8 categories. QST speeds up training by 3.2× and reduces the memory footprint by 1.7×, while even improving the score by 0.46 compared with QLoRA. Notably, QST's chatbot performance also outperforms the original LLaMA-2-70B, achieving an improvement of 0.21.
Figure 6 compares QST, QLoRA, and the original LLaMA-2-70B in different categories using MT-Bench. Out of the 8 categories, QST-70B outperforms QLoRA and LLaMA-2-70B in four: STEM, Extraction, Coding, and Roleplay. In the Humanities category, all models demonstrate comparable proficiency. For the remaining three categories, QLoRA takes the lead in Reasoning and Writing, while the original LLaMA-2-70B shows superior performance in Mathematics.
Future_Work: null
Limitation: null
references:
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. 2021. A general language assistant as a laboratory.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human.
Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth PASCAL recognizing textual entailment challenge. TAC, 7:8.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.
Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. QLoRA: Efficient finetuning of quantized LLMs. arXiv preprint arXiv:2305.14314.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. Third International Workshop on Paraphrasing (IWP2005).
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers.
Ali Edalati, Marzieh Tahaei, Ivan Kobyzev, Vahid Partovi Nia, James J Clark, and Mehdi Rezagholizadeh. 2022. KronA: Parameter efficient tuning with Kronecker adapter. arXiv preprint arXiv:2212.10650.
Luciano Floridi and Massimo Chiriatti. 2020. GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30:681–694.
Jonathan Frankle and Michael Carbin. 2018. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635.
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. 2020. Linear mode connectivity and the lottery ticket hypothesis. International Conference on Machine Learning, pages 3259–3269. PMLR.
Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. 2017. The reversible residual network: Backpropagation without storing activations. Advances in Neural Information Processing Systems.
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2021. Towards a unified view of parameter-efficient transfer learning. arXiv preprint arXiv:2110.04366.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. International Conference on Machine Learning.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
Shankar Iyer. First quora dataset release: Question pairs. https://quoradata.quora.
com/First-Quora-Dataset-Release-\\ Question-Pairs.", "year": 2017}, {"authors": ["Diederik P Kingma", "Jimmy Ba."], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980.", "year": 2014}, {"authors": ["Nikita Kitaev", "\u0141ukasz Kaiser", "Anselm Levskaya."], "title": "Reformer: The efficient transformer", "venue": "arXiv preprint arXiv:2001.04451.", "year": 2020}, {"authors": ["Animesh Koratana", "Daniel Kang", "Peter Bailis", "Matei Zaharia."], "title": "Lit: Learned intermediate representation training for model compression", "venue": "International Conference on Machine Learning, pages 3509\u20133518. PMLR.", "year": 2019}, {"authors": ["Yann LeCun", "L\u00e9on Bottou", "Yoshua Bengio", "Patrick Haffner."], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE, 86(11):2278\u20132324.", "year": 1998}, {"authors": ["Brian Lester", "Rami Al-Rfou", "Noah Constant."], "title": "The power of scale for parameter-efficient prompt tuning", "venue": "arXiv preprint arXiv:2104.08691.", "year": 2021}, {"authors": ["Xiang Lisa Li", "Percy Liang."], "title": "Prefix-tuning: Optimizing continuous prompts for generation", "venue": "arXiv preprint arXiv:2101.00190.", "year": 2021}, {"authors": ["Baohao Liao", "Shaomu Tan", "Christof Monz."], "title": "Make your pre-trained model reversible: From parameter to memory efficient fine-tuning", "venue": "arXiv preprint arXiv:2306.00477.", "year": 2023}, {"authors": ["Haokun Liu", "Derek Tam", "Mohammed Muqeeth", "Jay Mohta", "Tenghao Huang", "Mohit Bansal", "Colin A Raffel."], "title": "Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2022}, {"authors": ["Xiao Liu", "Yanan Zheng", "Zhengxiao Du", "Ming Ding", "Yujie Qian", "Zhilin Yang", "Jie Tang."], "title": "Gpt understands, too", "venue": "AI Open.", "year": 2023}, {"authors": 
["Karttikeya Mangalam", "Haoqi Fan", "Yanghao Li", "ChaoYuan Wu", "Bo Xiong", "Christoph Feichtenhofer", "Jitendra Malik."], "title": "Reversible vision transformers", "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages", "year": 2022}, {"authors": ["Yuning Mao", "Lambert Mathias", "Rui Hou", "Amjad Almahairi", "Hao Ma", "Jiawei Han", "Wen-tau Yih", "Madian Khabsa."], "title": "Unipelt: A unified framework for parameter-efficient language model tuning", "venue": "arXiv preprint arXiv:2110.07577.", "year": 2021}, {"authors": ["Brian W Matthews."], "title": "Comparison of the predicted and observed secondary structure of t4 phage lysozyme", "venue": "Biochimica et Biophysica Acta (BBA)Protein Structure, 405(2):442\u2013451.", "year": 1975}, {"authors": ["Sewon Min", "Mike Lewis", "Luke Zettlemoyer", "Hannaneh Hajishirzi."], "title": "Metaicl: Learning to learn in context", "venue": "arXiv preprint arXiv:2110.15943.", "year": 2021}, {"authors": ["OpenAI."], "title": "GPT-4 technical report", "venue": "CoRR, abs/2303.08774.", "year": 2023}, {"authors": ["Long Ouyang", "Jeffrey Wu", "Xu Jiang", "Diogo Almeida", "Carroll Wainwright", "Pamela Mishkin", "Chong Zhang", "Sandhini Agarwal", "Katarina Slama", "Alex Ray"], "title": "Training language models to follow instructions with human", "year": 2022}, {"authors": ["Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer"], "title": "Automatic differentiation in pytorch", "year": 2017}, {"authors": ["Jonas Pfeiffer", "Aishwarya Kamath", "Andreas R\u00fcckl\u00e9", "Kyunghyun Cho", "Iryna Gurevych."], "title": "Adapterfusion: Non-destructive task composition for transfer learning", "venue": "arXiv preprint arXiv:2005.00247.", "year": 2020}, {"authors": ["Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever"], "title": "Language models are 
unsupervised multitask learners", "venue": "OpenAI blog,", "year": 2019}, {"authors": ["Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu."], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "The Journal of Machine Learning Research,", "year": 2020}, {"authors": ["Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang"], "title": "Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250", "year": 2016}, {"authors": ["Andrei A Rusu", "Neil C Rabinowitz", "Guillaume Desjardins", "Hubert Soyer", "James Kirkpatrick", "Koray Kavukcuoglu", "Razvan Pascanu", "Raia Hadsell."], "title": "Progressive neural networks", "venue": "arXiv preprint arXiv:1606.04671.", "year": 2016}, {"authors": ["Teven Le Scao", "Angela Fan", "Christopher Akiki", "Ellie Pavlick", "Suzana Ili\u0107", "Daniel Hesslow", "Roman Castagn\u00e9", "Alexandra Sasha Luccioni", "Fran\u00e7ois Yvon", "Matthias Gall\u00e9"], "title": "Bloom: A 176bparameter open-access multilingual language model", "year": 2022}, {"authors": ["Timo Schick", "Hinrich Sch\u00fctze."], "title": "Exploiting cloze questions for few shot text classification and natural language inference", "venue": "arXiv preprint arXiv:2001.07676.", "year": 2020}, {"authors": ["Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D Manning", "Andrew Y Ng", "Christopher Potts."], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "Proceedings of the 2013 conference on empiri-", "year": 2013}, {"authors": ["Nisan Stiennon", "Long Ouyang", "Jeffrey Wu", "Daniel Ziegler", "Ryan Lowe", "Chelsea Voss", "Alec Radford", "Dario Amodei", "Paul F Christiano."], "title": "Learning to summarize with human feedback", "venue": "Advances in Neural Information Processing Systems, 33:3008\u2013", "year": 2020}, 
{"authors": ["Yi-Lin Sung", "Jaemin Cho", "Mohit Bansal."], "title": "Lst: Ladder side-tuning for parameter and memory efficient transfer learning", "venue": "Advances in Neural Information Processing Systems, 35:12991\u201313005.", "year": 2022}, {"authors": ["Rohan Taori", "Ishaan Gulrajani", "Tianyi Zhang", "Yann Dubois", "Xuechen Li", "Carlos Guestrin", "Percy Liang", "Tatsunori B. Hashimoto."], "title": "Stanford alpaca: An instruction-following llama model", "venue": "https://github.com/tatsu-lab/", "year": 2023}, {"authors": ["Hugo Touvron", "Thibaut Lavril", "Gautier Izacard", "Xavier Martinet", "Marie-Anne Lachaux", "Timoth\u00e9e Lacroix", "Baptiste Rozi\u00e8re", "Naman Goyal", "Eric Hambro", "Faisal Azhar"], "title": "Llama: Open and efficient foundation language models", "year": 2023}, {"authors": ["Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman."], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "arXiv preprint arXiv:1804.07461.", "year": 2018}, {"authors": ["Yizhong Wang", "Yeganeh Kordi", "Swaroop Mishra", "Alisa Liu", "Noah A Smith", "Daniel Khashabi", "Hannaneh Hajishirzi."], "title": "Self-instruct: Aligning language model with self generated instructions", "venue": "arXiv preprint arXiv:2212.10560.", "year": 2022}, {"authors": ["Yizhong Wang", "Swaroop Mishra", "Pegah Alipoormolabashi", "Yeganeh Kordi", "Amirreza Mirzaei", "Anjana Arunkumar", "Arjun Ashok", "Arut Selvan Dhanasekaran", "Atharva Naik", "David Stap"], "title": "Supernaturalinstructions: Generalization via declarative", "year": 2022}, {"authors": ["Alex Warstadt", "Amanpreet Singh", "Samuel R Bowman."], "title": "Neural network acceptability judgments", "venue": "Transactions of the Association for Computational Linguistics, 7:625\u2013641.", "year": 2019}, {"authors": ["Jason Wei", "Xuezhi Wang", "Dale Schuurmans", "Maarten Bosma", "Fei Xia", "Ed Chi", "Quoc V Le", "Denny 
Zhou"], "title": "Chain-of-thought prompting elicits reasoning in large language models", "venue": "Advances in Neural Information Processing Systems,", "year": 2022}, {"authors": ["Adina Williams", "Nikita Nangia", "Samuel R Bowman."], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "arXiv preprint arXiv:1704.05426.", "year": 2017}, {"authors": ["Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "R\u00e9mi Louf", "Morgan Funtowicz"], "title": "Huggingface\u2019s transformers: State-ofthe-art natural language processing", "year": 2019}, {"authors": ["Jeffrey O Zhang", "Alexander Sax", "Amir Zamir", "Leonidas Guibas", "Jitendra Malik."], "title": "Side-tuning: a baseline for network adaptation via additive side networks", "venue": "Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020,", "year": 2020}, {"authors": ["Susan Zhang", "Stephen Roller", "Naman Goyal", "Mikel Artetxe", "Moya Chen", "Shuohui Chen", "Christopher Dewan", "Mona Diab", "Xian Li", "Xi Victoria Lin"], "title": "Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068", "year": 2022}, {"authors": ["Lianmin Zheng", "Wei-Lin Chiang", "Ying Sheng", "Siyuan Zhuang", "Zhanghao Wu", "Yonghao Zhuang", "Zi Lin", "Zhuohan Li", "Dacheng Li", "Eric. P Xing", "Hao Zhang", "Joseph E. Gonzalez", "Ion Stoica"], "title": "Judging llm-as-a-judge with mt-bench and chatbot arena", "year": 2023}, {"authors": ["Han Zhou", "Xingchen Wan", "Ivan Vuli\u0107", "Anna Korhonen."], "title": "Autopeft: Automatic configuration search for parameter-efficient fine-tuning", "venue": "arXiv preprint arXiv:2301.12132.", "year": 2023}, {"authors": ["Barret Zoph", "Quoc V Le."], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578.", "year": 2016}]
|
acl_23
|
Unsupervised Multimodal Clustering for Semantics Discovery in Multimodal Utterances
|
[{"affiliations": [], "name": "Hanlei Zhang"}, {"affiliations": [], "name": "Hua Xu"}, {"affiliations": [], "name": "Fei Long"}, {"affiliations": [], "name": "Xin Wang"}, {"affiliations": [], "name": "Kai Gao"}]
|
SP:061eb9278d3e424db94c192bbbaa458e16607908
|
Discovering the semantics of multimodal utterances is essential for understanding human language and enhancing human-machine interactions. Existing methods manifest limitations in leveraging nonverbal information for discerning complex semantics in unsupervised scenarios. This paper introduces a novel unsupervised multimodal clustering method (UMC), making a pioneering contribution to this field. UMC introduces a unique approach to constructing augmentation views for multimodal data, which are then used to perform pre-training to establish well-initialized representations for subsequent clustering. An innovative strategy is proposed to dynamically select high-quality samples as guidance for representation learning, gauged by the density of each sample’s nearest neighbors. Besides, it is equipped to automatically determine the optimal value for the top-K parameter in each cluster to refine sample selection. Finally, both high- and low-quality samples are used to learn representations conducive to effective clustering. We build baselines on benchmark multimodal intent and dialogue act datasets. UMC shows remarkable improvements of 2-6% scores in clustering metrics over state-of-the-art methods, marking the first successful endeavor in this domain. The complete code and data are available at https://github.com/thuiar/UMC.
|
1 Introduction

Discovering the semantics of dialogue utterances in unsupervised multimodal data requires integrating various modalities (i.e., text, video, and audio) to effectively mine the complicated semantics inherent in multimodal language. Conventional methods for semantics discovery typically focus solely on the text modality with clustering algorithms (Zhang et al., 2021a, 2023), failing to leverage the rich multimodal information in the real world (e.g., body language, facial expressions, and tones).
∗ Hua Xu is the corresponding author.
However, we argue that non-verbal modalities (i.e., video and audio) also play a critical role when performing unsupervised clustering. Taking Figure 1 as an example, relying solely on textual information yields clustering results that differ from the ground truth of multimodal cluster allocations (a detailed analysis on real-world examples is available in Appendix A), suggesting that non-verbal modalities can provide useful cues for semantics discovery. Moreover, effectively capturing multimodal interactions can yield more powerful and robust representations, thereby better addressing the challenges of ambiguous intent-cluster boundaries found in text-based clustering (see Section 6.3 and Appendix J). Discovering multimodal utterance semantics holds significant promise for a variety of applications, including video content recommendation, efficient multimodal data annotation, and virtual human technologies (detailed in Appendix B).
Understanding semantics in multimodal utterances has attracted much attention with the boom in multimodal language analysis (Poria et al., 2019; Saha et al., 2021b; Zhang et al., 2022a). For example, Saha et al. (2021b) annotated multimodal dialogue act (DA) labels on two popular multimodal multi-party conversational datasets (Busso
et al., 2008; Poria et al., 2019) and performed DA recognition using attention sub-networks built upon modality encoders. Zhang et al. (2022a) pioneered multimodal intent analysis, introducing a new dataset with multimodal intent labels and establishing baselines with three multimodal fusion methods (Tsai et al., 2019; Rahman et al., 2020; Hazarika et al., 2020). However, these works remain restricted to supervised tasks, i.e., the training target for each piece of data is known, which is not applicable in unsupervised scenarios.
In contrast, semantics discovery is an emerging field in NLP. It fundamentally operates as a clustering task and has seen the development of many unsupervised (Cheung and Li, 2012; Padmasundari and Bangalore, 2018; Haponchyk et al., 2018; Zhang et al., 2023) and semi-supervised (Lin et al., 2020; Zhang et al., 2021c, 2022b; Zhou et al., 2023) methods. However, these methods are primarily designed for the text-only modality and lack proficiency in handling the diverse modalities encountered in real-world scenarios. Thus, there is a lack of multimodal clustering methods for discovering utterance semantics, which poses two challenges: (1) how to leverage information from non-verbal modalities to complement the text modality in clustering, and (2) how to fully exploit multimodal unlabeled data to learn clustering-friendly representations.
To address these challenges, we introduce UMC, a novel unsupervised multimodal clustering algorithm for semantics discovery, as shown in Figure 2. We utilize the capabilities of a pre-trained language model (Devlin et al., 2019) to process text data. For the video and audio modalities, deep features are initially extracted using powerful backbones from computer vision and speech signal processing; two Transformer encoders are then employed to capture the deep semantics of these features. The text modality is designated as the anchor, guiding the learning of the other modalities. For this purpose, we concatenate features from all three modalities and mask the video or audio features with zero vectors, creating two sets of positive augmentation views. These multimodal representations and their augmentations are used in an unsupervised contrastive loss, yielding well-initialized representations for the subsequent process.
To fully mine the semantic similarities among unsupervised multimodal data, we introduce a novel strategy that initially selects high-quality samples.
This strategy employs a dynamic sample selection threshold t, aiming to select the highest-quality t percent of samples in each iteration for training. This selection is based on a unique mechanism that calculates the density of each sample within its respective cluster and ranks the samples accordingly. In addition, an evaluation process is designed to automatically determine the optimal parameter for the top-K nearest neighbors from a set of candidates. After selecting high-quality samples, we propose a sequential process for multimodal representation learning. This process begins by learning from high-quality samples using a supervised contrastive loss and then refines the remaining low-quality samples using an unsupervised contrastive loss. This two-step approach promotes beneficial intra-class and inter-class relations among high-quality samples while pushing apart low-quality samples, thereby generating representations conducive to clustering. The entire process is repeated until the sample selection threshold t reaches 100%.
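The overall iterative procedure described above can be sketched as a short training loop. The helper steps are deliberately left as comment stubs (they are hypothetical names, not the authors' API); what the sketch pins down is the control flow: a threshold t that grows linearly per Eq. 4 until every sample would be selected.

```python
def umc_training_loop(t0=0.1, delta=0.05):
    """Sketch of the iterative UMC procedure (stubbed steps are illustrative).

    Pre-training (Step 1) happens once before this loop. Each iteration then
    clusters, selects the densest t-fraction of each cluster, and runs the
    two-stage contrastive learning, with t growing until it reaches 100%.
    """
    thresholds, it = [], 0
    while True:
        t = t0 + delta * it            # Eq. 4: t = t0 + Delta * iter
        if t >= 1.0:                   # stop once all samples would be selected
            break
        # Step 2: cluster z^TAV (inheriting previous centroids) and pick the
        #         densest t-fraction of each cluster as high-quality samples.
        # Step 3: supervised contrastive learning on the high-quality samples,
        #         then unsupervised contrastive learning on the rest.
        thresholds.append(round(t, 3))
        it += 1
    return thresholds

schedule = umc_training_loop()   # e.g. [0.1, 0.15, 0.2, ...] up to just below 1.0
```

With the default t0 = 0.1 used in the paper and an assumed increment of 0.05, the loop runs for a fixed, data-independent number of iterations, which is what makes the curriculum predictable.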
We summarize our contributions as follows: In this work, we make a pioneering contribution by formulating the challenging multimodal semantics discovery task. To solve this problem, we first introduce a novel method for constructing positive augmentations for multimodal data, effectively leveraging non-verbal modalities for unsupervised pre-training, which provides a good initialization for unsupervised clustering.
Then, we propose a new clustering algorithm, UMC, which features an innovative high-quality sample selection strategy and a sequential representation learning method between high- and low-quality samples, resulting in excellent performance in both single-modality and multimodal settings.
Finally, we establish baselines using benchmark multimodal intent and dialogue datasets. Extensive experiments show that the proposed UMC outperforms state-of-the-art clustering algorithms by a notable margin of 2-6% scores in standard clustering metrics. To the best of our knowledge, this is the first successful attempt at leveraging multiple modalities for unsupervised clustering, marking a substantial advancement in this area.
|
Unsupervised clustering is fundamental in machine learning. Classic clustering methods like K-Means (MacQueen et al., 1967) and Agglomerative
Clustering (Gowda and Krishna, 1978) iteratively assign clusters until convergence based on features. Deep clustering methods, like DEC (Xie et al., 2016) and DCN (Yang et al., 2017), enhance this process by jointly performing clustering and feature learning, employing stacked autoencoders (Vincent et al., 2010). DeepCluster (Caron et al., 2018) uses cluster assignments as guidance for feature learning.
Recent methods using contrastive learning (Chen et al., 2020) have achieved state-of-the-art performance. For instance, SCCL (Zhang et al., 2021a) combines instance-level contrastive learning with cluster refinement from target distributions. CC (Kumar et al., 2022) optimizes contrastive losses at both the instance and cluster levels to generate clustering-friendly representations. However, these methods focus merely on a single text or image modality and fall short with multimodal data. MCN (Chen et al., 2021) is tailored for multimodal clustering, learning a unified representation from all modalities and applying cross-modal contrastive losses during clustering. However, MCN struggles with complex utterance semantics.

Intent discovery is a key challenge in NLP, with numerous clustering methods developed to address it. Early methods (Hakkani-Tür et al., 2015; Haponchyk et al., 2018) use weakly supervised signals to aid in clustering but struggle to capture the high-level semantics in text. Recent methods (Lin
et al., 2020; Zhang et al., 2021c; Mou et al., 2022; Zhang et al., 2022b; Mou et al., 2023; Zhou et al., 2023; Shi et al., 2023) exploit limited labeled data to guide the feature learning process for clustering.
However, these methods suffer a substantial decrease in performance in totally unsupervised scenarios. USNID (Zhang et al., 2023) proposes a novel centroid-guided mechanism with a pre-training strategy, achieving significant improvements over previous methods. Yet, USNID also falls short in handling multimodal data. See Appendix C for more related works on multi-view clustering and multimodal language analysis.
|
For the task of multimodal semantics discovery, we are provided with a multimodal intent or dialogue act dataset D_mm = {(s_i^T, s_i^A, s_i^V) | y_i ∈ Y, i = 1, ..., N}, where each i-th instance s_i contains multimodal utterances, including text s_i^T, audio s_i^A, and video s_i^V. Here, N represents the total number of instances. The ground-truth label y_i, belonging to the set of intent or dialogue act classes Y = {y_i}_{i=1}^{K_Y}, remains unseen during training and validation and is only available during testing. The number of classes is denoted by K_Y.
The objective is to learn a multimodal neural network F(·) capable of obtaining multimodal representations z conducive to clustering. These representations are subsequently employed to divide the set {s_i}_{i=1}^{N} into K_Y groups.

To obtain multimodal representations, we first extract deep features from the text, video, and audio modalities. For text, we employ the pre-trained language model (PLM) BERT (Devlin et al., 2019) as the encoder, fine-tuning it on the text inputs s^T. The initial [CLS] token embedding, x^T ∈ R^{D_T}, serves as the sentence-level representation, where the feature dimension D_T is 768. We then incorporate a linear layer f_T(·), yielding z^T ∈ R^{D_H}. Here, D_H indicates a dimensionally reduced space, enhancing computational efficiency and accentuating primary features.
For the non-verbal modalities, we use semantically rich features as inputs, as suggested in (Saha et al., 2020; Zhang et al., 2022a). For video, we employ the Swin Transformer (Liu et al., 2021) to extract frame-level video feature representations x^V ∈ R^{L_V × D_V} from the video inputs s^V. Here, L_V represents the video length, and the feature dimension D_V is 1024. For audio s^A, we first extract audio waveforms as in (Zhang et al., 2022a) and then use WavLM (Chen et al., 2022) to obtain features x^A ∈ R^{L_A × D_A}, where L_A and D_A denote the audio length and the feature dimension of 768, respectively. Unsupervised multimodal clustering can benefit from these two powerful non-verbal features extracted from the Swin Transformer and WavLM models. A comparison between them and other multimodal features is shown in Appendix D.
For both the audio and video modalities, we initially introduce a linear layer f_M(·), in alignment with the text modality. Subsequently, we apply the multi-headed attention mechanism of the Transformer (Vaswani et al., 2017) encoder, adeptly capturing intricate semantic relationships and temporal nuances. Eventually, in line with (Tsai et al., 2019), the last sequence element is employed to derive the sentence-level representation z^M:
z^M = Transformer(f_M(x^M))[−1],   (1)
where M ∈ {A, V} and z^M ∈ R^{D_H}. Following this, we concatenate the representations z^T, z^A, and z^V and pass them through a nonlinear fusion layer F : R^{3·D_H} → R^{D_H}. This layer is designed to learn cross-modal interactions, yielding the combined representation z^TAV ∈ R^{D_H}:

z^TAV = F(Concat(z^T, z^A, z^V)),   (2)
where F is defined as W_1 · σ_GELU(Dropout(·)) + b_1. Here, σ_GELU represents the GELU activation function, and W_1 and b_1 are the corresponding weight matrix and bias, respectively. Subsequently, we employ z^TAV and its augmentations for further clustering and representation learning.

Effective pre-training strategies can provide well-initialized representations conducive to clustering (Zhang et al., 2023). Unsupervised contrastive learning (Chen et al., 2020) has emerged as an effective approach for unsupervised clustering (Zhang et al., 2021a; Li et al., 2021): it pushes samples apart so that they distribute uniformly in the feature space while capturing implicit similarity relations between augmentations. However, existing methods often fall short in providing effective augmentations for multimodal data. In this work, we introduce a novel non-verbal modality masking method to address this gap.
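The fusion step of Eqs. 1-2 and the zero-vector masking that produces the augmented views can be sketched in plain Python. The toy dimension, random weights, and helper names are illustrative assumptions, and dropout is omitted; only the structure (concatenate, GELU, linear map, reuse of the same F for masked inputs) follows the text.

```python
import math
import random

random.seed(0)
D_H = 4  # reduced hidden dimension (small here purely for illustration)

def gelu(x):
    # GELU activation (tanh approximation)
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

# Hypothetical random parameters for the fusion layer F: R^{3*D_H} -> R^{D_H}
W1 = [[random.gauss(0, 0.1) for _ in range(3 * D_H)] for _ in range(D_H)]
b1 = [0.0] * D_H

def fuse(z_t, z_a, z_v):
    """z^TAV = F(Concat(z^T, z^A, z^V)), with F(.) = W_1 * GELU(Dropout(.)) + b_1.

    Dropout is skipped (inference-style sketch); GELU is applied to the
    concatenation before the linear map, matching the written form of F.
    """
    concat = z_t + z_a + z_v                       # list concatenation -> 3*D_H
    h = [gelu(x) for x in concat]
    return [sum(W1[r][c] * h[c] for c in range(3 * D_H)) + b1[r] for r in range(D_H)]

z_t, z_a, z_v = [0.5] * D_H, [0.2] * D_H, [-0.1] * D_H
z_tav = fuse(z_t, z_a, z_v)
# The augmented views reuse the same fusion layer with one modality zeroed out:
z_ta0 = fuse(z_t, z_a, [0.0] * D_H)   # video masked
z_t0v = fuse(z_t, [0.0] * D_H, z_v)   # audio masked
```

Because the same F(·) processes both the full and the masked inputs, the three views share parameters, which is what lets the contrastive objective treat them as augmentations of one sample.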
Given the predominant role of the text modality in intent analysis, we retain it as the core modality and mask either the video or audio modality for data augmentation. For the i-th sample z^TAV_i in a mini-batch of B samples, either the video or the audio representation is replaced with a zero vector, and Eq. 2 is used to derive z^TA0_i and z^T0V_i as positively augmented samples. For each positive pair (i, j) among the generated 3B augmented samples, we apply the multimodal unsupervised contrastive learning loss:
L^mucl_{i,j} = − log [ exp(sim(φ_1(z_i), φ_1(z_j)) / τ_1) / Σ_k 1_[k≠i] exp(sim(φ_1(z_i), φ_1(z_k)) / τ_1) ],   (3)
where z_i ∈ {z^TAV_i, z^TA0_i, z^T0V_i}, sim(·, ·) refers to the dot product of two L2-normalized vectors, and φ_1(·) is a non-linear layer with ReLU activation, serving as the contrastive head. The parameter τ_1 represents the temperature, and 1_[k≠i] is an indicator function evaluating to 1 if and only if k ≠ i, and 0 otherwise.
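Eq. 3 is the standard NT-Xent loss evaluated over the 3B views. A minimal sketch, assuming the inputs are already the contrastive-head outputs φ_1(z) (the toy batch and temperature are illustrative):

```python
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def sim(a, b):
    # dot product of two L2-normalized vectors
    return sum(x * y for x, y in zip(a, b))

def nt_xent_pair(feats, i, j, tau=0.5):
    """L^mucl_{i,j} for one positive pair (i, j) over all 3B augmented views.

    feats: list of contrastive-head outputs phi_1(z). The denominator sums
    over all k != i, matching the indicator 1_[k != i] in Eq. 3.
    """
    feats = [l2_normalize(f) for f in feats]
    num = math.exp(sim(feats[i], feats[j]) / tau)
    den = sum(math.exp(sim(feats[i], feats[k]) / tau)
              for k in range(len(feats)) if k != i)
    return -math.log(num / den)

# Toy batch (B = 2): three views per instance (z^TAV, z^TA0, z^T0V).
views = [[1.0, 0.0], [0.9, 0.1], [0.8, 0.2],   # instance 0
         [0.0, 1.0], [0.1, 0.9], [0.2, 0.8]]   # instance 1
loss_pos = nt_xent_pair(views, 0, 1)   # positive pair: two views of instance 0
loss_neg = nt_xent_pair(views, 0, 3)   # mismatched pair, for comparison
```

As expected, the loss for a genuine positive pair is smaller than for a pair drawn from different instances, which is the gradient signal that pulls a sample's masked views together.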
By masking the video or audio modality with zero vectors, the model can focus on learning the implicit similarities in the shared modalities among positive pairs (i.e., text and video, text and audio, and text alone). This further encourages the model to capture intricate relationships and leverage complementary information across modalities.

[Figure 2: The architecture of UMC. Step 1: multimodal unsupervised pre-training. Text, audio, and video inputs are encoded by the PLM encoder, A-Transformer (on WavLM features), and V-Transformer (on Swin Transformer features), projected by f_T(·), f_A(·), f_V(·), fused by F(·) into z^TAV, and trained with unsupervised contrastive learning over the views z^TAV, z^TA0, z^T0V through the contrastive head φ_1(·). Step 2: clustering and high-quality sample selection, keeping the samples above the threshold t = t_0 + Δ·iter in each cluster. Step 3: multimodal representation learning, applying supervised contrastive learning to the selected indexes and unsupervised contrastive learning to the remaining samples through the contrastive head φ_2(·). Steps 2 and 3 are performed iteratively until t = 100%.]

After pre-training, we employ the representations z^TAV to perform clustering. Specifically, we adopt the K-Means++ algorithm (Arthur and Vassilvitskii, 2007) for this task due to its advanced initial centroid selection technique, which improves convergence over standard K-Means.
However, we observe that the cluster assignments obtained directly from K-Means++ are of insufficient quality to guide the learning of multimodal representations. To address this, we introduce a strategy that incrementally incorporates high-quality samples into the learning process. This is achieved through a curriculum-based method in which we progressively adjust the sample selection threshold t, which dictates the proportion of samples selected from each cluster in a given training iteration. The threshold t is updated linearly as follows:
t = t0 +∆ · iter, (4)
where t, t0 ∈ [0, 1], t0 is the initial threshold set to 0.1 (see Appendix E for a detailed discussion), iter is the iteration index within the epoch, and ∆ is a preset positive increment, applied after each epoch.
To further refine clustering performance, we incorporate the centroid inheritance strategy proposed in (Zhang et al., 2023). Specifically, K-Means++ is utilized only during the first training iteration. In subsequent iterations, the cluster centroids from the previous iteration are inherited as
initial centroids. This approach effectively leverages historical clustering information to guide and improve current clustering results.
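The centroid inheritance strategy amounts to warm-starting Lloyd's algorithm with the previous iteration's centroids. A minimal plain-Python sketch (the simple random seeding below stands in for proper K-Means++ initialization, and all data are toy values):

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, init_centroids=None, iters=10):
    """Lloyd's algorithm with optional inherited centroids.

    On the first training iteration init_centroids is None and a simplified
    random init replaces K-Means++ seeding; on later iterations the previous
    centroids are passed in, implementing centroid inheritance.
    """
    if init_centroids is None:
        centroids = [list(p) for p in random.sample(points, k)]
    else:
        centroids = [list(c) for c in init_centroids]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid per point
        assign = [min(range(k), key=lambda c: dist2(p, centroids[c])) for p in points]
        # update step: mean of each cluster's members
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return centroids, assign

random.seed(7)
pts = [[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]]
cents, assign1 = kmeans(pts, 2)                     # first iteration: fresh init
_, assign2 = kmeans(pts, 2, init_centroids=cents)   # later iteration: inherit
```

Passing the converged centroids back in leaves the partition unchanged on unchanged features; during training the features move between iterations, so inheritance carries the historical clustering forward instead of re-seeding from scratch.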
Then, we need to identify high-quality samples for representation learning. We introduce a novel mechanism for selecting high-quality samples, as depicted in Figure 3. This mechanism comprises two main steps: density calculation, and high-quality sample selection and evaluation. To discern high-quality samples within each cluster, we propose using density as the criterion. The underlying intuition is that high-quality samples are likely to exhibit high local density, whereas low-quality, anomalous, or falsely clustered data are expected to have low local density. For the i-th sample, we compute its density ρ_i as the reciprocal of the average distance between z^TAV_i and its top-K nearest neighbors:
ρ_i = K_near / Σ_{j=1}^{K_near} d_{ij},   (5)
where Knear denotes the number of top-K nearest neighbors. dij represents the Euclidean distance between the ith sample and its jth nearest neighbor. After calculating the density of each sample in each cluster, we rank them based on their densities in descending order. Specifically, for each sample in the kth cluster Ck with a density of ρi, we compute a sorted index list IdxCk as follows:
Idx_{C_k} = argsort(−[ρ_1, ρ_2, · · · , ρ_n]),   (6)
where argsort on the negated densities yields indices in descending order of density, and n represents the number of samples in C_k. The high-quality samples are those with the highest densities: letting m = ⌊n · t⌋ be the number of samples selected from cluster C_k under threshold t, the chosen samples are denoted z_TAV(Idx_{C_k,1}), . . . , z_TAV(Idx_{C_k,m}), where z_TAV(Idx_{C_k,i}) is the ith selected sample feature in cluster C_k according to the ordered density indices.
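Eqs. 5 and 6 amount to a few lines of numpy; the following is a hedged sketch (function names are ours, not the paper's) that computes per-sample densities within one cluster and keeps the top-⌊n·t⌋ densest samples.

```python
import numpy as np

def densities(z, k_near):
    """Eq. 5: density of each sample as K_near over the summed distances
    to its K_near nearest neighbours (self excluded)."""
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                  # exclude the sample itself
    nearest = np.sort(d, axis=1)[:, :k_near]     # distances d_ij, j = 1..K_near
    return k_near / nearest.sum(axis=1)

def select_high_quality(z, k_near, t):
    """Eq. 6 + thresholding: indices of the top-floor(n*t) densest samples."""
    order = np.argsort(-densities(z, k_near))    # descending density
    m = int(np.floor(len(z) * t))
    return order[:m]

# Four tightly packed points plus one outlier: the outlier has low density
# and is excluded when t = 0.8 (m = 4).
cluster = np.array([[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1], [10.0, 10.0]])
chosen = select_high_quality(cluster, k_near=2, t=0.8)
```

The outlier's nearest-neighbour distances are an order of magnitude larger than those of the dense points, so its density (and hence its rank) collapses, which is exactly the intuition stated above.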
Considering that real-world data might not exhibit a uniform distribution across each class, assigning a fixed K_near to every cluster could compromise the precision of density calculations, subsequently affecting the selection of high-quality samples. To address this, we introduce an innovative method to automatically select the optimal K^k_near for each cluster C_k. Initially, we provide a candidate set {K^k_{near,q}}_{q=1}^{u}, uniformly sampled based on the cluster size |C_k|. Specifically, K^k_{near,q} is defined as:
K^k_{near,q} = ⌊|C_k| · (L + ∆′ · (q − 1))⌋,   (7)
where L is the lower proportion bound with the constraint 0 ≤ L ≤ 1, ∆′ is a fixed interval, and u is the number of candidates. Then, for each candidate K^k_{near,q}, we use Eq. 6 to compute sorted indices Idx^q_{C_k} and select a subset C^q_k with the top-m samples. The quality of C^q_k is gauged through the cluster cohesion metric, measuring intra-cluster similarity. The cohesion of C^q_k is defined as:
coh(C^q_k) = Σ_{i=1}^{m} coh(C^q_k, i),   (8)

coh(C^q_k, i) = (1 / (m − 1)) Σ_{j=1, j≠i}^{m} d(z_TAV(Idx^q_{C_k,i}), z_TAV(Idx^q_{C_k,j})),   (9)
where m is the previously defined number of chosen samples, d(·) represents the Euclidean distance. The cohesion score can effectively capture the feature compactness and reflect the cluster quality. The optimal selected candidate index qopt is calculated as:
q_opt = argmax_q { coh(C^q_k) }.   (10)
That is, the optimal K^k_{near,q_opt} corresponds to the candidate with the highest cluster cohesion score.
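Putting Eqs. 7-10 together, the per-cluster search can be sketched as below. This is an illustrative, self-contained numpy version (our own function names); note that it follows Eq. 10 literally, i.e., it selects the candidate whose cohesion score as defined in Eqs. 8-9 is largest.

```python
import numpy as np

def _top_density_subset(z, k_near, m):
    """Density ranking (Eqs. 5-6) restricted to the top-m samples."""
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    rho = k_near / np.sort(d, axis=1)[:, :k_near].sum(axis=1)
    return np.argsort(-rho)[:m]

def optimal_k_near(z, t, L=0.1, delta=0.02, u=10):
    """Pick K_near for one cluster by maximising the cohesion score (Eqs. 7-10)."""
    n = len(z)
    m = int(np.floor(n * t))
    best_k, best_coh = None, -np.inf
    for q in range(1, u + 1):
        k_near = int(np.floor(n * (L + delta * (q - 1))))     # Eq. 7
        if k_near < 1:
            continue
        sub = z[_top_density_subset(z, k_near, m)]
        pd = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=2)
        # Eqs. 8-9: self-distance is zero, so summing full rows equals
        # summing over j != i.
        coh = (pd.sum(axis=1) / (m - 1)).sum()
        if coh > best_coh:                                    # Eq. 10
            best_k, best_coh = k_near, coh
    return best_k

rng = np.random.default_rng(0)
example_cluster = rng.normal(size=(40, 4))
k_opt = optimal_k_near(example_cluster, t=0.5)
```

With |C_k| = 40, L = 0.1, ∆′ = 0.02 and u = 10, the candidate set spans proportions 0.10 through 0.28 of the cluster, i.e., K_near ∈ {4, …, 11}.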
Subsequently, we use {K^k_{near,q_opt}}_{k=1}^{K_Y} to obtain the selected high-quality indices Idx′ = {Idx^{q_opt}_{C_k}}_{k=1}^{K_Y} with Eqs. 5 and 6. These indices are then employed to select high-quality samples for subsequent representation learning. The high-quality samples identified by the selected indices Idx′ tend to have more reliable pseudo-labels, so we employ them as a guiding set to facilitate the learning of clustering-friendly representations. We aim to leverage these samples to capture high-level similarity relations between pairwise samples. To achieve this, we introduce the multimodal supervised contrastive loss:
L^mscl_i = −(1 / |P(i)|) Σ_{p∈P(i)} log [ exp(sim(l_i, l_p)/τ_2) / Σ_j 1[j≠i] exp(sim(l_i, l_j)/τ_2) ],   (11)
where l_i = ϕ_2(z_i), and ϕ_2 is a non-linear layer with ReLU activation, consistent with Eq. 3. Here, we perform the same data augmentation techniques as in Section 4.2, and l_i ∈ {l_TAV,i, l_T0V,i, l_TA0,i}. τ_2 denotes the temperature parameter. P(i) is the set of indices of the augmented samples that share the same class as l_i. With this loss, each sample learns not only from its own augmentations but also from the clustering information derived from high-quality pseudo-labels.
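For concreteness, a minimal numpy sketch of Eq. 11 for a single anchor follows; in the paper the loss is computed over the augmented multimodal views, whereas here we assume a flat batch of L2-normalised projections and cluster-derived pseudo-labels (the names `msc_loss` and `pseudo_labels` are ours).

```python
import numpy as np

def msc_loss(l, pseudo_labels, i, tau2=0.2):
    """Multimodal supervised contrastive loss for anchor i (Eq. 11).

    l:             (N, D) L2-normalised projections, so dot product = cosine sim
    pseudo_labels: cluster assignments of the selected high-quality samples
    """
    sims = np.exp((l @ l[i]) / tau2)
    denom = sims.sum() - sims[i]                       # sum over j != i
    positives = [p for p in range(len(l))
                 if p != i and pseudo_labels[p] == pseudo_labels[i]]
    return -np.mean([np.log(sims[p] / denom) for p in positives])

# Anchor 0 and sample 1 share a pseudo-label and point in the same direction,
# so the loss for anchor 0 is small.
l = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
loss = msc_loss(l, pseudo_labels=[0, 0, 1], i=0)
```

Because the positive set P(i) comes from pseudo-labels rather than ground truth, the quality of the selected samples directly determines how informative this loss is, which is why the density-based selection above matters.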
Conversely, low-quality samples are prone to erroneous clustering, where dissimilar samples may be grouped into the same class. This misgrouping can disrupt the integrity of the clustering process. To mitigate this issue, we propose the application of an unsupervised contrastive loss to these samples. This loss function is designed to increase the separation between distinct low-quality samples, thereby encouraging a more uniform distribution in the feature space, as supported by (Zhang et al., 2021a). Specifically, we use Eq. 3, replacing ϕ1 with ϕ3, and apply this modified equation to the remaining samples in the training set, excluding those with selected indices Idx′.
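The unsupervised loss applied to the remaining low-quality samples is an instance-level (SimCLR-style) contrastive objective, where each view's only positive is its paired augmentation. A hedged numpy sketch, assuming a batch of 2N L2-normalised views with rows i and i+N being the two augmentations of sample i:

```python
import numpy as np

def nt_xent(l, tau=0.2):
    """Instance-level contrastive loss over 2N views (rows i and i+N paired)."""
    n = len(l) // 2
    logits = np.exp((l @ l.T) / tau)
    np.fill_diagonal(logits, 0.0)                      # exclude j == i
    loss = 0.0
    for i in range(2 * n):
        pos = (i + n) % (2 * n)                        # the paired view
        loss += -np.log(logits[i, pos] / logits[i].sum())
    return loss / (2 * n)

# Two samples, two views each: paired views are identical unit vectors,
# distinct samples are orthogonal, so the loss is small but positive.
views = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
ucl = nt_xent(views)
```

Pushing every low-quality sample away from all others in this way encourages the more uniform feature distribution that the paper cites as beneficial (Zhang et al., 2021a).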
In our approach, we sequentially apply multimodal supervised contrastive learning to high-quality samples and unsupervised contrastive learning to low-quality samples. This two-step strategy is crafted to concurrently enhance multimodal representation learning and the clustering process. The training phase concludes when the sample selection threshold t (as defined in Eq. 4) reaches 100%. During the inference stage, we utilize the well-trained model to extract z_TAV and subsequently employ the K-Means++ algorithm for prediction.
We conduct extensive ablation studies and show the results in Table 3. (1) w/o Step 1: Removing Step 1 results in performance drops of 11-14%, 12-15%, and 8-15% across the MIntRec, MELD-DA, and IEMOCAP-DA datasets, emphasizing the importance of our proposed non-verbal modality masking strategy in enhancing subsequent clustering.
(2) Random (Step 2): To assess the impact of our high-quality sampling strategy in Step 2, we replace it with random sampling (i.e., randomly selecting the top-t percent of samples from each cluster). This change leads to average score decreases of 3.42%, 1.22%, and 1.05% on the clustering metrics, highlighting that carefully selected high-quality samples are pivotal in guiding the learning of multimodal representations.
(3) SCL (Step 3): To evaluate the two-step learning approach in Step 3, we remove the unsupervised contrastive learning loss (UCL), resulting in more significant decreases of 6-11% across all three datasets.
(4) Step 1 & other strategies (K-Means++, UCL, MSE): Since the high-quality sampling strategy (Step 2) works in conjunction with multimodal representation learning (Step 3), we experiment with alternative strategies and observe their performance. Initially, applying K-Means++ directly after Step 1 leads to dramatic drops of over 10% across all three datasets. Then, implementing UCL after Step 1 still brings noticeable decreases of 1.48%, 1.08%, and 6.17% in average clustering metric scores. Lastly, we apply a mean squared error (MSE) loss between each sample feature and its corresponding cluster centroid, resulting in extremely low performance with decreases of over 20% in ARI scores. These
ablation studies further validate the effectiveness of each component in our proposed UMC algorithm.
6.2 Effect of the Knear Selection Strategy
In Section 4.3.2, we introduce an automatic method for determining the optimal Knear for each cluster. To demonstrate its efficacy, we compare it with a fixed Knear approach, where the fixed value varies from 5 to 25 in increments of 5. We then evaluate the performance using ARI scores.
As shown in Figure 4, the automatic Knear selection strategy outperforms all fixed Knear settings, except for Knear = 25 on the MELD-DA dataset, where it shows only a slight decrease. However, that fixed setting suffers a substantial decrease on the other two datasets. The reason is that the fixed strategy struggles with the imbalanced data distribution across clusters of varying sizes, whereas our approach adapts Knear to the unique characteristics of each cluster. Importantly, this approach obviates the need for extensive manual hyper-parameter tuning while still ensuring excellent performance.

Figure 5 uses t-SNE (Maaten and Hinton, 2008) to visualize representations on the IEMOCAP-DA dataset, with additional results provided in Appendix J. SCCL exhibits substantial overlap among intent classes. CC displays more compact clusters, yet still presents indistinct cluster boundaries. USNID shows clear cluster boundaries, but the different clusters are close in the feature space and difficult to discern. UMC (Text) demonstrates the most distinct cluster boundaries among text-based baselines, highlighting the robustness of the representations learned through our clustering algorithm. When incorporating non-verbal modalities, the multimodal representations learned by UMC reveal that each cluster is both compact and well-separated from others, verifying its efficacy.

This paper introduces the multimodal semantics discovery task and proposes a novel unsupervised multimodal clustering (UMC) method to address this critical challenge. UMC effectively utilizes non-verbal modalities for semantics discovery by constructing positive multimodal data augmentations. Besides, it proposes a novel high-quality sample selection mechanism and a two-step representation learning strategy.
We conduct extensive experiments on both multimodal intent and dialogue act benchmark datasets. UMC achieves remarkable improvements of 2-6% in standard clustering metrics compared to state-of-the-art clustering algorithms. Further analyses demonstrate the effectiveness of each component and the robustness of the learned representations conducive to clustering. We believe this work makes significant progress in this area and provides a solid foundation for related research.
We use MIntRec, MELD-DA, and IEMOCAP-DA as benchmark datasets for the multimodal semantics discovery task. The rationale for using these datasets is that the defined intents or dialogue acts typically exhibit a variety of distinct sentence-level semantics and possess properties of uncertainty in the open world, making them suitable for discovery in unsupervised scenarios. Detailed statistics of the three datasets are presented in Table 1, with further information on dataset specifics and their splits available in Appendix F.

We compare UMC with state-of-the-art unsupervised clustering methods from both NLP and CV, as well as multimodal clustering methods. The TEXTOIR platform (Zhang et al., 2021b) is used to reproduce the NLP methods. Detailed descriptions of the baselines are as follows:
SCCL (Zhang et al., 2021a): It jointly optimizes clustering and instance-level contrastive learning losses. The learning rate is set to 3e-5.
CC (Li et al., 2021): It employs dual non-linear heads to independently optimize instance-level and cluster-level contrastive learning losses. The learning rate is set to 3e-5.
USNID (Zhang et al., 2023): It performs strong data augmentation by randomly erasing words in a sentence. It also introduces a centroid-guided clustering mechanism to construct high-quality supervised signals for representation learning.
UMC (Text): This UMC variant excludes video and audio modalities during clustering. Unlike UMC, which uses multimodal augmentations, here
we apply dropout twice to generate positive augmentations for contrastive learning.
MCN (Chen et al., 2021): It employs an online K-Means algorithm to dynamically determine cluster centers and periodically update them. However, we find its performance drops with online clustering. Thus, we modify it to perform K-Means on the full dataset to ensure optimal performance.

Following (Fahad et al., 2014; Saxena et al., 2017), we use four standard clustering metrics to evaluate clustering performance: Normalized Mutual Information (NMI), Accuracy (ACC), Adjusted Rand Index (ARI), and Fowlkes-Mallows Index (FMI). Details can be found in Appendix G.

For the text modality, we utilize the pre-trained BERT model from the Huggingface Transformers library (Wolf et al., 2020) and optimize it using the AdamW optimizer (Loshchilov and Hutter, 2019). It is important to note that all baselines utilize the same backbone for each of the three modalities for a fair comparison. In our experiments, the multimodal data employed for pre-training and training adhere to a consistent distribution and characteristics, and no external data is used for pre-training.
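Of the four metrics, ARI is perhaps the least obvious to compute by hand; as a hedged, dependency-free sketch (ACC would additionally need Hungarian matching of clusters to classes, omitted here), the standard contingency-table formulation is:

```python
import numpy as np
from math import comb

def adjusted_rand_index(y_true, y_pred):
    """ARI from the contingency table (standard definition).

    Assumes the labelings contain at least two distinct groups overall,
    so the denominator max_index - expected is non-zero.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    cont = np.array([[int(np.sum((y_true == c) & (y_pred == k)))
                      for k in clusters] for c in classes])
    sum_comb = sum(comb(v, 2) for v in cont.ravel())
    a = sum(comb(int(v), 2) for v in cont.sum(axis=1))   # class marginals
    b = sum(comb(int(v), 2) for v in cont.sum(axis=0))   # cluster marginals
    expected = a * b / comb(len(y_true), 2)
    max_index = (a + b) / 2
    return (sum_comb - expected) / (max_index - expected)
```

ARI is invariant to permutations of cluster labels (a perfect but relabeled clustering still scores 1.0), which is why it suits unsupervised evaluation where cluster indices carry no meaning.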
We configure the sequence lengths (L_T, L_V, L_A) for the MIntRec, MELD-DA, and IEMOCAP-DA datasets to (30, 230, 480), (70, 250, 520), and (44, 230, 380), respectively. The threshold t is incremented by ∆ = 0.05. For the selection of the optimal Knear, we configure L = 0.1, ∆′ = 0.02, and u = 10. The learning rates are 2e-5 for pre-training and (3e-4, 2e-4, 5e-4) for the training stages of the MIntRec, MELD-DA, and IEMOCAP-DA datasets, respectively. The temperature parameters τ1, τ2, and τ3 are set to 0.2, (1.4, 20, 6), and (1, 20, 6) for these datasets, respectively. A detailed hyper-parameter sensitivity analysis is provided in Appendix H. We use a training batch size of 128 and report average performance over five random seeds (0-4).

Table 2 shows the results on the multimodal semantics discovery task. UMC (Text), a variant of our method utilizing only the text modality, exhibits comparable or superior performance to existing state-of-the-art methods across most clustering metrics. This indicates our proposed clustering method is highly effective with its innovative high-quality sampling and two-step representation learning strategies.
Compared to UMC (Text), UMC incorporates non-verbal modalities and demonstrates significant and consistent improvements of 1-3%, 2-4%, and 1-4% on the MIntRec, MELD-DA, and IEMOCAP-DA datasets, respectively, across all clustering metrics. These results highlight the importance of non-verbal modalities and illustrate that our data augmentation strategy can effectively model multimodal interactions, thereby enhancing the learning of robust representations conducive to clustering. Remarkably, UMC achieves notable increases of 2-7%, 2-7%, and 2-8% in ARI, ACC, and FMI
across all datasets, verifying the ability of UMC to capture complex multimodal semantics in our challenging task. A case study for error analysis is detailed in Appendix I.
There are two limitations in this work. Firstly, given the complexity of real-world multimodal intent datasets, the achieved clustering performance still suggests significant potential for further improvements. Secondly, while this study establishes a foundational approach for automatically determining the Knear parameter, there is scope for exploring diverse methodologies within this automatic selection mechanism.
[{"authors": ["Humam Alwassel", "Dhruv Mahajan", "Bruno Korbar", "Lorenzo Torresani", "Bernard Ghanem", "Du Tran."], "title": "Self-supervised learning by cross-modal audiovideo clustering", "venue": "Proc. of NeurIPS, pages 9758\u2013 9770.", "year": 2020}, {"authors": ["David Arthur", "Sergei Vassilvitskii."], "title": "kmeans++: the advantages of careful seeding", "venue": "Proc. of SODA, pages 1027\u20131035.", "year": 2007}, {"authors": ["Alexei Baevski", "Yuhao Zhou", "Abdelrahman Mohamed", "Michael Auli"], "title": "wav2vec 2.0: A framework for self-supervised learning of speech representations", "year": 2020}, {"authors": ["Carlos Busso", "Murtaza Bulut", "Chi-Chun Lee", "Abe Kazemzadeh", "Emily Mower", "Samuel Kim", "Jeannette N Chang", "Sungbok Lee", "Shrikanth S Narayanan."], "title": "Iemocap: Interactive emotional dyadic motion capture database", "venue": "Language resources", "year": 2008}, {"authors": ["Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze."], "title": "Deep clustering for unsupervised learning of visual features", "venue": "Proc. of ECCV, pages 132\u2013149.", "year": 2018}, {"authors": ["Brian Chen", "Andrew Rouditchenko", "Kevin Duarte", "Hilde Kuehne", "Samuel Thomas", "Angie Boggust", "Rameswar Panda", "Brian Kingsbury", "Rogerio Feris", "David Harwath"], "title": "Multimodal clustering networks for self-supervised learning from unlabeled", "year": 2021}, {"authors": ["Sanyuan Chen", "Chengyi Wang", "Zhengyang Chen", "Yu Wu", "Shujie Liu", "Zhuo Chen", "Jinyu Li", "Naoyuki Kanda", "Takuya Yoshioka", "Xiong Xiao"], "title": "Wavlm: Large-scale self-supervised pre-training for full stack speech processing", "venue": "IEEE Journal of Se-", "year": 2022}, {"authors": ["Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton."], "title": "A simple framework for contrastive learning of visual representations", "venue": "Proc. 
of ICML, pages 1597\u20131607.", "year": 2020}, {"authors": ["Jackie Chi Kit Cheung", "Xiao Li."], "title": "Sequence clustering and labeling for unsupervised query intent discovery", "venue": "Proc. of WSDM, pages 383\u2013392.", "year": 2012}, {"authors": ["Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova."], "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "venue": "Proc. of NAACL, pages 4171\u20134186.", "year": 2019}, {"authors": ["Adil Fahad", "Najlaa Alshatri", "Zahir Tari", "Abdullah Alamri", "Ibrahim Khalil", "Albert Y. Zomaya", "Sebti Foufou", "Abdelaziz Bouras."], "title": "A survey of clustering algorithms for big data: Taxonomy and empirical analysis", "venue": "IEEE Transactions on Emerging", "year": 2014}, {"authors": ["K Chidananda Gowda", "G Krishna."], "title": "Agglomerative clustering using the concept of mutual nearest neighbourhood", "venue": "Pattern Recognition, 10(2):105\u2013 112.", "year": 1978}, {"authors": ["Dilek Hakkani-T\u00fcr", "Yun-Cheng Ju", "Geoffrey Zweig", "Gokhan Tur."], "title": "Clustering novel intents in a conversational interaction system with semantic parsing", "venue": "Proc. of Interspeech.", "year": 2015}, {"authors": ["Wei Han", "Hui Chen", "Soujanya Poria."], "title": "Improving multimodal fusion with hierarchical mutual information maximization for multimodal sentiment analysis", "venue": "Proc. of EMNLP, pages 9180\u20139192.", "year": 2021}, {"authors": ["Iryna Haponchyk", "Antonio Uva", "Seunghak Yu", "Olga Uryupina", "Alessandro Moschitti."], "title": "Supervised clustering of questions into intents for dialog system applications", "venue": "Proc. of EMNLP, pages 2310\u2013 2321.", "year": 2018}, {"authors": ["Devamanyu Hazarika", "Roger Zimmermann", "Soujanya Poria."], "title": "Misa: Modality-invariant andspecific representations for multimodal sentiment analysis", "venue": "Proc. 
of ACM MM, pages 1122\u20131131.", "year": 2020}, {"authors": ["Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun."], "title": "Deep residual learning for image recognition", "venue": "Proc. of CVPR, pages 770\u2013778.", "year": 2016}, {"authors": ["Di Hu", "Feiping Nie", "Xuelong Li."], "title": "Deep multimodal clustering for unsupervised audiovisual learning", "venue": "Proc. of CVPR, pages 9248\u20139257.", "year": 2019}, {"authors": ["Guimin Hu", "Ting-En Lin", "Yi Zhao", "Guangming Lu", "Yuchuan Wu", "Yongbin Li."], "title": "UniMSE: Towards unified multimodal sentiment analysis and emotion recognition", "venue": "Proc. of EMNLP, pages 7837\u20137851.", "year": 2022}, {"authors": ["Shudong Huang", "Ivor W Tsang", "Zenglin Xu", "Jiancheng Lv", "Quanhui Liu."], "title": "Cdd: Multi-view subspace clustering via cross-view diversity detection", "venue": "Proc. of ACM MM, pages 2308\u20132316.", "year": 2021}, {"authors": ["Rajat Kumar", "Mayur Patidar", "Vaibhav Varshney", "Lovekesh Vig", "Gautam Shroff."], "title": "Intent detection and discovery from user logs via deep semisupervised contrastive clustering", "venue": "Proc. of NAACL, pages 1836\u20131853.", "year": 2022}, {"authors": ["Yunfan Li", "Peng Hu", "Zitao Liu", "Dezhong Peng", "Joey Tianyi Zhou", "Xi Peng."], "title": "Contrastive clustering", "venue": "Proc. of AAAI, pages 8547\u20138555.", "year": 2021}, {"authors": ["Ting-En Lin", "Hua Xu", "Hanlei Zhang"], "title": "Discovering new intents via constrained deep adaptive clustering with cluster refinement", "venue": "In Proc. of AAAI,", "year": 2020}, {"authors": ["Ze Liu", "Yutong Lin", "Yue Cao", "Han Hu", "Yixuan Wei", "Zheng Zhang", "Stephen Lin", "Baining Guo."], "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "venue": "Proc. of ICCV, pages 10012\u201310022.", "year": 2021}, {"authors": ["Ilya Loshchilov", "Frank Hutter."], "title": "Decoupled weight decay regularization", "venue": "Proc. 
of ICLR.", "year": 2019}, {"authors": ["Laurens van der Maaten", "Geoffrey Hinton."], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research, 9(11):2579\u20132605.", "year": 2008}, {"authors": ["James MacQueen"], "title": "Some methods for classification and analysis of multivariate observations", "venue": "Proc. of the fifth Berkeley symposium on mathematical statistics and probability, pages 281\u2013297.", "year": 1967}, {"authors": ["Adyasha Maharana", "Quan Hung Tran", "Franck Dernoncourt", "Seunghyun Yoon", "Trung Bui", "Walter Chang", "Mohit Bansal."], "title": "Multimodal intent discovery from livestream videos", "venue": "Proc. of NAACL Findings, pages 476\u2013489.", "year": 2022}, {"authors": ["Yutao Mou", "Keqing He", "Yanan Wu", "Zhiyuan Zeng", "Hong Xu", "Huixing Jiang", "Wei Wu", "Weiran Xu."], "title": "Disentangled knowledge transfer for OOD intent discovery with unified contrastive learning", "venue": "Proc. of ACL, pages 46\u201353.", "year": 2022}, {"authors": ["Yutao Mou", "Xiaoshuai Song", "Keqing He", "Chen Zeng", "Pei Wang", "Jingang Wang", "Yunsen Xian", "Weiran Xu."], "title": "Decoupling pseudo label disambiguation and representation learning for generalized intent discovery", "venue": "Proc. of ACL, pages 9661\u20139675.", "year": 2023}, {"authors": ["Srinivas Bangalore Padmasundari", "Srinivas Bangalore."], "title": "Intent discovery through unsupervised semantic text clustering", "venue": "Proc. of Interspeech.", "year": 2018}, {"authors": ["Xi Peng", "Zhenyu Huang", "Jiancheng Lv", "Hongyuan Zhu", "Joey Tianyi Zhou."], "title": "Comic: Multi-view clustering without parameter selection", "venue": "Proc. of ICML, pages 5092\u20135101.", "year": 2019}, {"authors": ["Soujanya Poria", "Devamanyu Hazarika", "Navonil Majumder", "Gautam Naik", "Erik Cambria", "Rada Mihalcea."], "title": "MELD: A multimodal multi-party dataset for emotion recognition in conversations", "venue": "Proc. 
of ACL, pages 527\u2013536.", "year": 2019}, {"authors": ["Wasifur Rahman", "Md Kamrul Hasan", "Sangwu Lee", "Amir Zadeh", "Chengfeng Mao", "Louis-Philippe Morency", "Ehsan Hoque."], "title": "Integrating multimodal information in large pretrained transformers", "venue": "Proc. of ACL, pages 2359\u20132369.", "year": 2020}, {"authors": ["Tulika Saha", "Dhawal Gupta", "Sriparna Saha", "Pushpak Bhattacharyya."], "title": "Emotion aided dialogue act classification for task-independent conversations in a multi-modal framework", "venue": "Cognitive Computation, 13:277\u2013289.", "year": 2021}, {"authors": ["Tulika Saha", "Aditya Prakash Patra", "Sriparna Saha", "Pushpak Bhattacharyya."], "title": "Towards emotionaided multi-modal dialogue act classification", "venue": "Proc. of ACL, pages 4361\u20134372.", "year": 2020}, {"authors": ["Tulika Saha", "Apoorva Upadhyaya", "Sriparna Saha", "Pushpak Bhattacharyya."], "title": "Towards sentiment and emotion aided multi-modal speech act classification in Twitter", "venue": "Proc. of NAACL, pages 5727\u20135737.", "year": 2021}, {"authors": ["Amit Saxena", "Mukesh Prasad", "Akshansh Gupta", "Neha Bharill", "Om Prakash Patel", "Aruna Tiwari", "Meng Joo Er", "Weiping Ding", "Chin-Teng Lin."], "title": "A review of clustering techniques and developments", "venue": "Neurocomputing, pages 664\u2013681.", "year": 2017}, {"authors": ["Tao Shi", "Shao-Lun Huang."], "title": "MultiEMO: An attention-based correlation-aware multimodal fusion framework for emotion recognition in conversations", "venue": "Proc. of ACL, pages 14752\u201314766.", "year": 2023}, {"authors": ["Wenkai Shi", "Wenbin An", "Feng Tian", "Qinghua Zheng", "QianYing Wang", "Ping Chen."], "title": "A diffusion weighted graph framework for new intent discovery", "venue": "Proc. 
of EMNLP, pages 8033\u20138042.", "year": 2023}, {"authors": ["Mengjing Sun", "Pei Zhang", "Siwei Wang", "Sihang Zhou", "Wenxuan Tu", "Xinwang Liu", "En Zhu", "Changjian Wang."], "title": "Scalable multi-view subspace clustering with unified anchors", "venue": "Proc. of ACM MM, pages 3528\u20133536.", "year": 2021}, {"authors": ["Zhongkai Sun", "Prathusha Sarma", "William Sethares", "Yingyu Liang."], "title": "Learning relationships between text, audio, and video via deep canonical correlation for multimodal language analysis", "venue": "Proc. of AAAI, pages 8992\u20138999.", "year": 2020}, {"authors": ["Yao-Hung Hubert Tsai", "Shaojie Bai", "Paul Pu Liang", "J Zico Kolter", "Louis-Philippe Morency", "Ruslan Salakhutdinov."], "title": "Multimodal transformer for unaligned multimodal language sequences", "venue": "Proc. of ACL, pages 6558\u20136569.", "year": 2019}, {"authors": ["Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "\u0141ukasz Kaiser", "Illia Polosukhin."], "title": "Attention is all you need", "venue": "Proc. of NeurIPS, pages 6000\u20136010.", "year": 2017}, {"authors": ["Pascal Vincent", "Hugo Larochelle", "Isabelle Lajoie", "Yoshua Bengio", "Pierre-Antoine Manzagol", "L\u00e9on Bottou."], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of Machine", "year": 2010}, {"authors": ["Yiwei Wei", "Shaozu Yuan", "Ruosong Yang", "Lei Shen", "Zhangmeizhi Li", "Longbiao Wang", "Meng Chen."], "title": "Tackling modality heterogeneity with multiview calibration network for multimodal sentiment detection", "venue": "Proc. 
of ACL, pages 5240\u20135252.", "year": 2023}, {"authors": ["Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "R\u00e9mi Louf", "Morgan Funtowicz"], "title": "Transformers: State-of-the-art natural", "year": 2020}, {"authors": ["Junyuan Xie", "Ross Girshick", "Ali Farhadi"], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In Proc. of ICML,", "year": 2016}, {"authors": ["Bo Yang", "Xiao Fu", "Nicholas D Sidiropoulos", "Mingyi Hong."], "title": "Towards k-means-friendly spaces: Simultaneous deep learning and clustering", "venue": "Proc. of ICML, pages 3861\u20133870.", "year": 2017}, {"authors": ["Jiuding Yang", "Yakun Yu", "Di Niu", "Weidong Guo", "Yu Xu."], "title": "ConFEDE: Contrastive feature decomposition for multimodal sentiment analysis", "venue": "Proc. of ACL, pages 7617\u20137630.", "year": 2023}, {"authors": ["Wenmeng Yu", "Hua Xu", "Fanyang Meng", "Yilin Zhu", "Yixiao Ma", "Jiele Wu", "Jiyun Zou", "Kaicheng Yang."], "title": "Ch-sims: A chinese multimodal sentiment analysis dataset with fine-grained annotation of modality", "venue": "Proc. of ACL, pages 3718\u20133727.", "year": 2020}, {"authors": ["Wenmeng Yu", "Hua Xu", "Ziqi Yuan", "Jiele Wu."], "title": "Learning modality-specific representations with selfsupervised multi-task learning for multimodal sentiment analysis", "venue": "Proc. of AAAI, pages 10790\u201310797.", "year": 2021}, {"authors": ["Amir Zadeh", "Rowan Zellers", "Eli Pincus", "LouisPhilippe Morency."], "title": "Mosi: multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos", "venue": "arXiv preprint arXiv:1606.06259.", "year": 2016}, {"authors": ["AmirAli Bagher Zadeh", "Paul Pu Liang", "Soujanya Poria", "Erik Cambria", "Louis-Philippe Morency."], "title": "Multimodal language analysis in the wild: Cmumosei dataset and interpretable dynamic fusion graph", "venue": "Proc. 
of ACL, pages 2236\u20132246.", "year": 2018}, {"authors": ["Dejiao Zhang", "Feng Nan", "Xiaokai Wei", "Shang-Wen Li", "Henghui Zhu", "Kathleen R McKeown", "Ramesh Nallapati", "Andrew O Arnold", "Bing Xiang."], "title": "Supporting clustering with contrastive learning", "venue": "Proc. of NAACL, pages 5419\u20135430.", "year": 2021}, {"authors": ["Hanlei Zhang", "Xiaoteng Li", "Hua Xu", "Panpan Zhang", "Kang Zhao", "Kai Gao."], "title": "TEXTOIR: An integrated and visualized platform for text open intent recognition", "venue": "Proc. of ACL, pages 167\u2013174.", "year": 2021}, {"authors": ["Hanlei Zhang", "Xin Wang", "Hua Xu", "Qianrui Zhou", "Kai Gao", "Jianhua Su", "jinyue Zhao", "Wenrui Li", "Yanting Chen"], "title": "MIntrec 2.0: A large-scale benchmark dataset for multimodal intent recognition and out-of-scope detection in conversations", "year": 2024}, {"authors": ["Hanlei Zhang", "Hua Xu", "Ting-En Lin", "Rui Lyu."], "title": "Discovering new intents with deep aligned clustering", "venue": "Proc. of AAAI, pages 14365\u201314373.", "year": 2021}, {"authors": ["Hanlei Zhang", "Hua Xu", "Xin Wang", "Fei Long", "Kai Gao."], "title": "A clustering framework for unsupervised and semi-supervised new intent discovery", "venue": "IEEE Transactions on Knowledge and Data Engineering.", "year": 2023}, {"authors": ["Hanlei Zhang", "Hua Xu", "Xin Wang", "Qianrui Zhou", "Shaojie Zhao", "Jiayan Teng."], "title": "Mintrec: A new dataset for multimodal intent recognition", "venue": "Proc. of ACM MM, pages 1688\u20131697.", "year": 2022}, {"authors": ["Yi Zhang", "Xinwang Liu", "Siwei Wang", "Jiyuan Liu", "Sisi Dai", "En Zhu."], "title": "One-stage incomplete multiview clustering via late fusion", "venue": "Proc. of ACM MM, pages 2717\u20132725.", "year": 2021}, {"authors": ["Yuwei Zhang", "Haode Zhang", "Li-Ming Zhan", "Xiao-Ming Wu", "Albert Lam."], "title": "New intent discovery with pre-training and contrastive learning", "venue": "Proc. 
of ACL, pages 256\u2013269.", "year": 2022}, {"authors": ["Qianrui Zhou", "Hua Xu", "Hao Li", "Hanlei Zhang", "Xiaohan Zhang", "Yifan Wang", "Kai Gao."], "title": "Token-level contrastive learning with modality-aware prompting for multimodal intent recognition", "venue": "Proc. of AAAI, pages 17114\u201317122.", "year": 2024}, {"authors": ["Yunhua Zhou", "Guofeng Quan", "Xipeng Qiu."], "title": "A probabilistic framework for discovering new intents", "venue": "Proc. of ACL, pages 3771\u20133784.", "year": 2023}]
acl_23
MAGE: Machine-generated Text Detection in the Wild
Yafu Li, Qintong Li, Leyang Cui, Wei Bi, Zhilin Wang, Longyue Wang, Linyi Yang, Shuming Shi, Yue Zhang
Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective AI-generated text detection to mitigate risks like the spread of fake news and plagiarism. Existing research has been constrained by evaluating detection methods on specific domains or particular language models. In practical scenarios, however, the detector faces texts from various domains or LLMs without knowing their sources. To this end, we build a comprehensive testbed by gathering texts from diverse human writings and texts generated by different LLMs. Empirical results show challenges in distinguishing machine-generated texts from human-authored ones across various scenarios, especially out-of-distribution. These challenges are due to the decreasing linguistic distinctions between the two sources. Despite challenges, the top-performing detector can identify 86.54% of out-of-domain texts generated by a new LLM, indicating the feasibility for application scenarios.
1 Introduction

With constant advancements in artificial intelligence generated content (AIGC) technology (Rombach et al., 2022; Zhang and Agrawala, 2023; Shi et al., 2023; Brown et al., 2020; OpenAI, 2023b), texts generated by large language models (LLMs) (Brown et al., 2020; OpenAI, 2023b; Touvron et al., 2023; Taori et al., 2023) have reached a level comparable to that of human peers, enabling the generation of remarkably fluent and meaningful responses to various user queries.
Advanced LLMs have become prevalent in enhancing human life and productivity. Nevertheless, they can also be employed for purposes such as manipulating public opinion, spreading fake news, and facilitating student plagiarism. To this end, researchers have recently been putting effort into differentiating between texts written by humans and those generated by machines (Pu et al., 2022; Guo et al., 2023; Zhao et al., 2023; Mitchell et al., 2023). However, these findings are limited to testbeds of specific domains (Pu et al., 2022) or deepfake texts from certain models (Guo et al., 2023), or they assume the accessibility of the source LLMs (Zhao et al., 2023; Mitchell et al., 2023). Within a specific domain (e.g., BBC News), it can be easy to identify texts generated by a certain model (e.g., ChatGPT) from human writings (Pu et al., 2022; Mitchell et al., 2023).
* Work was conducted during the internships of Yafu Li and Qintong Li at Tencent AI Lab.
† Corresponding authors.
In practice, however, a machine-generated text detector may encounter fake news from various LLMs without knowing their sources, as depicted in Figure 1. The detector can also face ChatGPT-generated student assignments across different tasks such as story generation, question answering, and scientific writing. As the detector encounters increasingly diverse texts from both human-written and machine-generated sources, it has fewer surface patterns or linguistic differences to rely on. In a more demanding scenario, the detector must identify texts from unfamiliar domains or those
generated by unseen LLMs. In this study, we try to address the following research questions: (1) Can existing detection methods effectively distinguish texts generated by diverse LLMs for various writing tasks in real-world scenarios? (2) Are there inherent distinctions between human-written texts and machine-generated texts in an open-domain setting, irrespective of their topic or content?
To this end, we build a large-scale testbed, MAGE, for MAchine-GEnerated text detection, by collecting human-written texts from 7 distinct writing tasks (e.g., story generation, news writing and scientific writing) and generating corresponding machine-generated texts with 27 LLMs (e.g., ChatGPT, LLaMA, and Bloom) under 3 representative prompt types. We categorize the data into 8 testbeds, each exhibiting progressively higher levels of “wildness” in terms of distributional variance and detection complexity. Initially, we detect texts generated by a white-box LLM within a specific domain. Subsequently, we enhance the complexity by incorporating texts generated by additional LLMs across various writing tasks. The most challenging testbed necessitates the detector’s ability to identify out-of-domain texts generated by newly developed LLMs and perform detection against paraphrasing attacks.
We evaluate 4 commonly employed detection methods, encompassing both supervised and unsupervised approaches, on our proposed testbeds. Empirical results indicate that all detection methods are effective in identifying machine-generated texts from a single domain or generated by a limited range of LLMs. However, as the diversity of domains and models increases, all methods except the PLM-based detector experience significant performance deterioration. The challenge intensifies with out-of-distribution (OOD) testbeds, where even the best-performing detector misclassifies 61.95% of human-written texts from unseen domains. The suboptimal OOD performance can be effectively mitigated by leveraging a mere 0.1% of in-domain data, resulting in over 80% recall for identifying out-of-domain texts generated by previously unencountered LLMs. This demonstrates the feasibility of machine-generated text detection in real-world scenarios.
Finally, we investigate potential differences between human texts and machine generations that can be utilized for detection. Statistical findings demonstrate that while significant linguistic differences exist within a particular domain, they gradually converge as more texts from diverse domains and language models are included. Moreover, empirical results demonstrate that perplexity can serve as a fundamental feature for clustering the two sources of text. It is applicable to distinguishing between human and machine compositions in general, regardless of the text domain or the language model used for generation. We release our resources at https://github.com/yafuly/MAGE.
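The perplexity-based separation described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: the function names and the threshold value are our placeholders, and the per-token log-probabilities would in practice come from a scoring language model such as GPT-2.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-probability per token."""
    assert token_logprobs, "need at least one token"
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def classify_by_perplexity(token_logprobs, threshold):
    """Flag texts scoring *below* the threshold (more predictable to the
    scoring LM) as machine-generated, following the intuition that LLM
    outputs tend to have lower perplexity than human writing.
    The threshold is a placeholder to be tuned on validation data."""
    return "machine" if perplexity(token_logprobs) < threshold else "human"
```

In this sketch, a text whose tokens each have probability 0.5 under the scoring model has perplexity 2 and would be flagged as machine-generated under a threshold of 10, while a less predictable text would not.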
|
A line of work explores linguistic patterns to achieve automatic machine-writing detection, progressing through n-gram frequencies (Badaskar et al., 2008), entropy (Lavergne et al., 2008; Gehrmann et al., 2019), perplexity (Beresneva, 2016), and negative-curvature regions of the model's log probability (Mitchell et al., 2023; Bao et al., 2023). One limitation of these statistics-based methods is the white-box assumption that we can access the model's prediction distributions, hindering wider application to models behind APIs, such as ChatGPT. Another alternative paradigm is training neural-based detectors (Bakhtin et al., 2019; Fagni et al., 2021; Uchendu et al., 2020; OpenAI, 2023a). Some works (Meral et al., 2009; Krishna et al., 2023; Zhao et al., 2023; Kirchenbauer et al., 2023) explore the potential of watermarks in language models, making model-generated texts easier to detect. Liang et al. (2023) indicate that texts by non-native speakers are more likely to be incorrectly identified as AI-generated. Our work does not assume language models are enhanced with watermarks, instead considering a more common detection setting where we do not know the sources of the detected texts.
Current AI text detection has not achieved significant success, as evidenced by the successful exploits of paraphrasers that expose weaknesses in existing detectors (Sadasivan et al., 2023; Krishna et al., 2023), raising concerns about the robustness of current detection methods. On the other hand, most detectors focus on specific domains, such as news (Zellers et al., 2019b; Zhong et al., 2020) and reviews (Chakraborty et al., 2023), or specific models (Pu et al., 2022; Rodriguez et al., 2022; Mitchell et al., 2023). The transferability of detection capabilities to out-of-distribution scenarios, involving texts from unseen domains or models, remains uncertain and represents a crucial practical challenge. To address this issue, we examine a scenario where texts from various domains generated by different language models are combined, and we extend to out-of-distribution settings with consideration for paraphrasing attacks.
|
A detection system labels a text as either machine-generated or human-written, or outputs a probability distribution. In this work, we consider a set of commonly used detection methods. To showcase detection difficulty, we first consider naive baselines, i.e., human detection and ask ChatGPT, which ask human annotators and query ChatGPT, respectively, to identify the text source. For supervised methods, we choose the PLM-based classifier, which is commonly used in text detection (Rodriguez et al., 2022; Pu et al., 2022). We report the performance of Longformer (Beltagy et al., 2020) in the remainder of the paper, as it outperforms other commonly used PLMs, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and GPT-2 (Radford et al., 2019). Detailed comparisons can be found in Appendix E. GLTR (Gehrmann et al., 2019) is also included to represent methods that leverage model-based features. In addition, we include FastText (Joulin et al., 2017), which uses linguistic statistics as features. For unsupervised detection, we consider DetectGPT (Mitchell et al., 2023) to study the robustness of zero-shot detectors, which can also serve as a representative method that requires access to the text-generation LLM. Implementation details are shown in Appendix C.
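As a concrete illustration of one of these methods, GLTR's core features can be sketched as rank-bucket histograms. The bucket boundaries below (top-10/top-100/top-1000/rest) follow the GLTR paper, but the function itself is our simplified reconstruction, not the authors' code; the per-token ranks would be obtained by scoring the text with a language model such as GPT-2.

```python
def gltr_features(token_ranks, bins=(10, 100, 1000)):
    """GLTR-style features: the fraction of tokens whose rank under the
    scoring LM falls into each bucket (top-10, top-100, top-1000, rest).
    Machine-generated text tends to concentrate mass in the top buckets."""
    counts = [0] * (len(bins) + 1)
    for r in token_ranks:
        for i, b in enumerate(bins):
            if r <= b:
                counts[i] += 1
                break
        else:  # rank exceeds every bucket boundary
            counts[-1] += 1
    n = len(token_ranks)
    return [c / n for c in counts]
```

The resulting fixed-length feature vector can then be fed to any standard classifier, which is what makes this family of methods supervised despite relying on LM statistics.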
|
Data Sourcing. We collect human-written texts from a set of benchmark datasets, which cover diverse writing tasks including: (1) Opinion statement: 804 opinion statements from the /r/ChangeMyView (CMV) Reddit subcommunity (Tan et al., 2016) and 1,000 reviews from Yelp dataset (Zhang et al., 2015); (2) News article writing: 1,000 news articles from XSum (Narayan et al., 2018) and 777 news articles from TLDR_news*(TLDR); (3) Question answering: 1,000 answers from the ELI5 dataset (Fan et al., 2019); (4) Story generation: 1,000 prompted stories from the Reddit WritingPrompts (WP) dataset (Fan et al., 2018) and 1,000 stories from ROCStories Corpora (ROC) (Mostafazadeh et al., 2016); (5) Commonsense reasoning: 1,000 sentence sets for reasoning from HellaSwag (Zellers et al., 2019a); (6) Knowledge illustration: 1,000 Wikipedia paragraphs from SQuAD contexts (Rajpurkar et al., 2016); (7) Scientific writing: 1,000 abstracts of scientific articles from SciXGen (Chen et al., 2021a).
Model sets. We aim to adopt a wide spectrum of representative large language models (LLMs) to construct machine-generated texts. In particular, we consider 27 LLMs in this work: OpenAI GPT (text-davinci-002/text-davinci-003/gpt-3.5-turbo) (Brown et al., 2020), LLaMA (6B/13B/30B/65B) (Touvron et al., 2023), GLM-130B (Zeng et al., 2022), FLAN-T5 (small/base/large/xl/xxl) (Chung et al., 2022), OPT (125M/350M/1.3B/2.7B/6.7B/13B/30B/iml-1.3B/iml-30B) (Zhang et al., 2022a), BigScience (T0-3B/T0-11B/BLOOM-7B1) (Sanh et al., 2022; BigScience, 2023) and EleutherAI (GPT-J-6B and GPT-NeoX-20B) (Wang and Komatsuzaki, 2021; Black et al., 2022).
Prompts. To generate machine-generated text for each instance in the collected data, we use three types of prompts to feed the LLMs: (1) continuation prompts: ask LLMs to continue generation based on the previous 30 words of the original human-written text; (2) topical prompts: ask LLMs to generate texts based on a topic (e.g., argument, news title, story topic, etc.); and (3) specified prompts: topical prompts with specified information about the text sources (e.g., BBC news, Reddit post, etc.). The topical and specified prompts are designed for OpenAI models, as they can respond to such prompts robustly. We present several prompt examples in Appendix A.
*https://huggingface.co/datasets/JulesBelveze/TLDR_news
In summary, for each human-written text, we generate a set of machine-generated texts using 27 LLMs with 3 different prompts. Data construction details and statistics are presented in Appendix B.
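A minimal sketch of the continuation-prompt construction: the 30-word truncation is from the paper, while the helper name and the whitespace-based word splitting are our assumptions for illustration.

```python
def continuation_prompt(human_text: str, n_words: int = 30) -> str:
    """Build a continuation prompt: the LLM is asked to continue
    generation from the first 30 words of the human-written text."""
    return " ".join(human_text.split()[:n_words])
```

Topical and specified prompts instead condition on a topic string (plus source information such as "BBC news"), so their construction depends on the prompt templates given in Appendix A rather than on the human text itself.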
|
We proposed a comprehensive testbed for machine-generated text detection by gathering human-written texts from various writing tasks and machine-generated texts produced by different LLMs. Empirical results on commonly used detection methods demonstrated the challenge of AI-generated text detection. Out-of-distribution settings posed a greater challenge for detectors to be employed in application scenarios. With the decision boundary refined, the best-performing detector on our testbeds (i.e., the Longformer detector) achieved 86.54% AvgRec on out-of-domain texts generated by a new LLM, i.e., GPT-4. By studying differences between human and machine compositions, we find that perplexity can serve as a fundamental feature for classification regardless of text domain or generation LLM. To the best of our knowledge, this is the first study to investigate the challenges and feasibility of AI-generated text detection in a "wild" testbed.
Limitations
Although we are the first to propose a comprehensive testbed for AI-generated text detection and validate detection effectiveness on frontier test sets, there are two major limitations: (1) We strive to include a wide variety of LLMs in our dataset. However, new LLMs such as Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023) continue to emerge and may not be currently included. Nevertheless, our dataset aims to serve as a testbed for selecting the best-performing detectors, which encounter sufficiently diverse machine-generated texts and can thus deal with texts from newly developed LLMs in the future. (2) We adopt benchmark datasets as text sources, which may have been used as training data for LLM pretraining. The detection capability may vary on new online texts that were not included in the LLMs' pretraining data. In the future, we plan to gather new online texts that have not been previously seen by LLMs to study such variation.
Ethics Statement
We honor the Code of Ethics. No private data or non-public information is used in this work. For human annotation (Section 6.1), we recruited our annotators from the linguistics departments of local universities through public advertisement with a specified pay rate. All of our annotators are senior undergraduate students or graduate students in linguistic majors who took this annotation as a part-time job. We pay them 60 CNY an hour. The local minimum salary in the year 2023 is 25.3 CNY per hour for part-time jobs. The annotation does not involve any personally sensitive information.
|
We consider each benchmark dataset as a separate domain, such as CMV, XSum, SciXGen, etc. We group the LLMs into 7 sets based on their source: OpenAI GPT set, LLaMA set, GLM-130B set, FLAN-T5 set, OPT set, BigScience set, and EleutherAI set. To investigate whether machine-generated text can be distinguished from human-written text, we categorize the collected data into 8 settings. These settings are determined by the sources of training and evaluation data and increase in detection difficulty. The simplest setting involves within-domain white-box detection, while the most challenging involves detection against paraphrasing attacks.
We first consider in-distribution settings, where the detection method is evaluated on texts from seen domains and model sets, i.e., the training and test data are from the same data source.
Testbed 1: Fixed-domain & Model-specific. Human-written texts come from a single domain and machine-generated texts are generated by a specific LLM (GPT-J-6B). A classifier is trained for each of the 10 domains, and the weighted average performance is reported. In this setting, we use only GPT-J-6B to generate fake texts instead of the entire model set from EleutherAI, aiming to simulate white-box detection, i.e., accessibility to the text-generating LLM, which is crucial for detection methods such as DetectGPT.
Testbed 2: Arbitrary-domains & Model-specific. Human-written texts are obtained from combining all 10 domains, while machine-generated texts are produced by a single model set, creating 7 independent testbeds, one for each model set. We train 7 classifiers accordingly and report weighted average performance.
Testbed 3: Fixed-domain & Arbitrary-models. Similarly, we include human-written texts from a single domain and obtain machine-generated texts using all model sets. In this way, we create 10 independent testbeds, one for each domain, and train 10 classifiers accordingly.
Testbed 4: Arbitrary-domains & Arbitrary-models. Human-written texts are from all domains, with machine-generated texts generated using all model sets, which creates an integral testbed covering the full range of data. We train a general classifier and report its performance.
Furthermore, we consider four out-of-distribution settings where the detection model is tested on texts from unseen domains or unseen models.
Testbed 5: Unseen Models. This setting evaluates whether the classifier can detect texts from unseen models. In this setting, texts generated by a specific model set are excluded from the training
data. The classifier is then trained on the remaining texts and tested on the excluded ones. This process creates 7 testbeds for cross-validation. We train 7 classifiers, one for each testbed, and report their weighted average performance.
Testbed 6: Unseen Domains. This setting evaluates whether the classifier can detect texts from unseen domains. In this setting, texts from a specific domain are excluded from the training data. The classifier is then trained on the remaining texts and tested on the excluded one. This process creates 10 testbeds for cross-validation. We train 10 classifiers, one for each testbed, and report weighted average performance.
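Both cross-validation settings above (holding out a model set, or holding out a domain) can be sketched with a single leave-one-out split helper; the dict-based record format is our assumption for illustration, not the paper's data format.

```python
def leave_one_out_splits(data, key):
    """Build 'unseen' testbeds: for each distinct value of `key`
    (e.g., 'domain' or 'model_set'), train on all records with other
    values and test on the held-out one. `data` is a list of dicts
    that each carry the `key` field."""
    values = sorted({d[key] for d in data})
    for held_out in values:
        train = [d for d in data if d[key] != held_out]
        test = [d for d in data if d[key] == held_out]
        yield held_out, train, test
```

Calling this with `key="domain"` yields the 10 Unseen Domains testbeds, and with `key="model_set"` the 7 Unseen Models testbeds, each pairing a training pool with a disjoint held-out test set.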
Testbed 7: Unseen Domains & Unseen Model. We go one step “wilder” by constructing an additional test set with texts from unseen domains generated by an unseen model, to test the detection ability in more practical scenarios. We consider four new datasets: CNN/DailyMail (See et al., 2017), DialogSum (Chen et al., 2021b), PubMedQA (Jin et al., 2019) and IMDb (Maas et al., 2011) to test the detection of machine-generated news, dialogues, scientific answers and movie reviews. We sample 200 instances from each dataset and use a newly developed LLM, i.e., GPT-4 (OpenAI, 2023b), with specially designed prompts (Appendix A) to create machine-generated texts.
Testbed 8: Paraphrasing Attack. Sadasivan et al. (2023) show that detection methods are vulnerable to being deceived by paraphrased target texts. Based on the Unseen Domains & Unseen Model test set, we paraphrase each sentence individually for both human-written and machine-generated texts, forming a more challenging test set. We adopt gpt-3.5-turbo as the paraphraser and treat paraphrases from both sources as machine-generated.

We report AUROC (the area under the receiver operating characteristic curve), which quantifies the classifier's potential for distinguishing between the positive and negative classes. An AUROC of 1.0 corresponds to a perfect classifier, whereas 0.5 represents random guessing. Following Nakov et al. (2013), we also consider AvgRec (average recall), which is calculated by averaging the recall scores on human-written texts (HumanRec) and machine-generated texts (MachineRec)†. These recall scores help us assess the realistic detection performance. For instance, black-box detection methods like human detection and ask ChatGPT cannot be evaluated using AUROC. Furthermore, determining a decision boundary based on a reliable validation set is challenging in an open-domain detection setting.

Table 1 shows that both ChatGPT and human annotators fail to distinguish machine-generated texts from human-written ones. The AvgRec is only slightly better than random guessing, suggesting that machine-generated texts have achieved a level of quality (e.g., fluency and coherence) comparable to that of humans. We then explore whether there exist underlying differences that can be captured by automatic detection methods. The results of in-domain detection are shown in Table 2 and the upper part of Table 3.
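The two metrics can be computed as follows. This is a from-scratch sketch (with label 1 = machine-generated) rather than the paper's evaluation code; the AUROC below uses the standard pairwise-ranking formulation, which is equivalent to the area under the ROC curve.

```python
def avg_rec(labels, preds):
    """AvgRec: mean of HumanRec (recall on label 0) and
    MachineRec (recall on label 1)."""
    def recall(cls):
        idx = [i for i, y in enumerate(labels) if y == cls]
        return sum(preds[i] == cls for i in idx) / len(idx)
    return (recall(0) + recall(1)) / 2

def auroc(labels, scores):
    """AUROC via the rank statistic: the probability that a randomly
    chosen machine-generated text receives a higher detector score
    than a randomly chosen human-written text (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Note that AvgRec depends on the chosen decision boundary while AUROC does not, which is exactly why the two metrics can disagree in the Unseen Domains setting discussed later.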
White-box Detection. From Table 2, we can observe that all detection methods obtain solid performance when the texts are from a specific domain and a specific LLM (GPT-J-6B) (i.e., Fixed-domain & Model-specific). Typically, DetectGPT performs well in identifying machine-generated texts when the scoring model matches the one used to generate the fake texts, i.e., it has access to the generation LLM in the white-box setting.
†Since our test sets are balanced, the precision score heavily relies on and can be reflected by the recall score. Therefore, we choose to report only the recall scores for a more intuitive evaluation.
PLM-based Detectors demonstrate robustness to texts from various sources. As shown in Table 3, the detection performance (AvgRec and AUROC) decreases as the detector encounters broader data sources, i.e., texts from various domains or various LLMs. For example, GLTR's AUROC drops from 0.94 to 0.80 and DetectGPT's drops from 0.92 to 0.57 when encountering texts from multiple models (Arbitrary-models). The severe performance drop of DetectGPT is attributed to its reliance on access to the generation LLMs (Mitchell et al., 2023). On the other hand, FastText faces significant challenges in detecting texts from various domains (Arbitrary-domains), despite its robustness on texts sourced from different language models. Among all detection methods, the Longformer detector consistently outperforms others in terms of AUROC and AvgRec. Despite minor performance degradation, Longformer surpasses other detectors by a considerable margin in the Arbitrary-domains & Arbitrary-models setting, where the detector encounters diverse texts from various domains and language models.

We further investigate whether the detection model can identify machine-generated texts in out-of-distribution settings, i.e., detect texts from unseen domains or generated by new LLMs. The results are presented in the lower part of Table 3. Empirical results indicate that, except for the Longformer detector, all other detectors perform poorly in identifying texts generated by unseen models. Furthermore, none of the detectors effectively classify texts from novel domains.
Unseen Models. Among all methods, the Longformer detector is the only one that performs well (with an AUROC of 0.95 and AvgRec of 86.61%) when detecting texts from unseen LLMs. The performance of FastText further degrades, with AUROC dropping from 0.83 to 0.74. GLTR faces a significant challenge when it comes to unseen models: its AUROC of 0.65 suggests that it struggles to differentiate between the text sources. The detection performance (Longformer) on each unseen model set is shown in Figure 2. The Longformer classifier has the most difficulty distinguishing texts generated by the OpenAI and FLAN-T5 models from human-written ones. By comparison, the detector can identify most of the machine-generated texts from other models, even if it has not encountered any of them during training. On the other hand, the difficulty of detection is influenced by the prompt types used for model generation. Texts generated from specified prompts (OpenAI(s)) are harder to distinguish than those from continuation prompts (OpenAI(c)) and topical prompts (OpenAI(t)). This may be because they follow a detailed prompt condition, making them more similar to human-written texts.
Unseen Domains. Detecting texts from unseen domains presents a heightened challenge for classifiers. Notably, even the top-performing model, Longformer, experiences a substantial decline in AvgRec, dropping from 90.53% to 68.40%. Typically, Longformer tends to classify human-written texts from unfamiliar domains as machine-generated, which results in a low HumanRec score but an almost perfect MachineRec. We present detection performance (Longformer) on each unseen domain in Figure 3. The top three text domains most likely to be misclassified as machine-generated are the ROC, XSum, and TLDR datasets. This could be attributed to their low average perplexity scores, which confuse PLM-based detectors (discussed in Section 7.2).

Figure 4: (a) Precision-Recall curve of the Longformer detector on the unseen domain (Yelp). A refined decision boundary obtains a better trade-off between precision and recall. (b) Detection performance in the "Unseen Domains" setting (XSum, ROC, Yelp and CMV) with the decision boundary adjusted based on different ratios of in-domain data.
Boundary Adjustment. Despite the low AvgRec in the Unseen Domains setting, Longformer achieves a high AUROC score (0.93). This suggests that the model can distinguish between the two classes but struggles with selecting an appropriate decision boundary, as shown in Figure 4a. To address this issue, we utilize a portion of the in-domain data from the training set to adjust the decision boundary. We compute an average decision boundary across 10 classifiers (in the Unseen Domains setting) and apply it universally across all domains. As depicted in Figure 4b, refining the decision boundary with only 0.1% of in-domain data (e.g., 4 instances for CMV) significantly enhances detection performance. Table 4 demonstrates that adjusting the decision boundary (using 0.1% of in-domain data) notably improves detection accuracy for both out-of-distribution settings.
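The boundary-refinement step can be sketched as a simple threshold search over a small labelled in-domain sample. Selecting the threshold by maximizing balanced recall (AvgRec) is our assumption of a reasonable criterion for illustration; the paper does not spell out the exact selection rule.

```python
def refine_boundary(scores, labels, candidates=None):
    """Choose the decision threshold maximizing balanced recall on a
    small labelled in-domain sample (scores are P(machine) from the
    detector; label 1 = machine), instead of the default 0.5 boundary."""
    if candidates is None:
        candidates = sorted(set(scores))

    def balanced_recall(t):
        preds = [1 if s >= t else 0 for s in scores]
        def recall(c):
            idx = [i for i, y in enumerate(labels) if y == c]
            return sum(preds[i] == c for i in idx) / len(idx)
        return (recall(0) + recall(1)) / 2

    return max(candidates, key=balanced_recall)
```

Averaging such refined thresholds across the 10 Unseen Domains classifiers, as described above, yields a single boundary applied universally across all domains.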
Unseen Domains & Unseen Model. We validate the detection ability of Longformer, the best-performing detector, on the Unseen Domains & Unseen Model testbed. The results are presented in Table 5. The Longformer detector trained using our dataset achieves high performance (0.94 AUROC) in detecting texts generated by GPT-4, even when sourced from newly added datasets and generated by a new LLM. After refining the boundary, the detector demonstrates balanced accuracy in detecting both text sources, resulting in an AvgRec of 86.54%. This showcases its feasibility for deployment in real-world scenarios.
Paraphrasing Attack. However, similar to other methods (Krishna et al., 2023), the Longformer detector also shows vulnerability to paraphrasing attacks, as shown in Table 5. The AUROC drops from 0.94 to 0.75 when the detector encounters additional paraphrased texts, which can be attributed to the shifted perplexity distribution of paraphrased texts (Section 7.2).
|
[{"authors": ["Sameer Badaskar", "Sachin Agarwal", "Shilpa Arora."], "title": "Identifying real or fake articles: Towards better language modeling", "venue": "Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-II.", "year": 2008}, {"authors": ["Anton Bakhtin", "Sam Gross", "Myle Ott", "Yuntian Deng", "Marc\u2019Aurelio Ranzato", "Arthur Szlam"], "title": "Real or fake? learning to discriminate machine from human generated text", "venue": "ArXiv preprint,", "year": 2019}, {"authors": ["Guangsheng Bao", "Yanbin Zhao", "Zhiyang Teng", "Linyi Yang", "Yue Zhang"], "title": "Fast-detectgpt: Efficient zero-shot detection of machine-generated text via conditional probability curvature", "year": 2023}, {"authors": ["Francesco Barbieri", "Luis Espinosa Anke", "Jose Camacho-Collados."], "title": "XLM-T: Multilingual language models in Twitter for sentiment analysis and beyond", "venue": "Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages", "year": 2022}, {"authors": ["Iz Beltagy", "Matthew E. Peters", "Arman Cohan."], "title": "Longformer: The long-document transformer", "venue": "ArXiv preprint, abs/2004.05150.", "year": 2020}, {"authors": ["Daria Beresneva."], "title": "Computer-generated text detection using machine learning: A systematic review", "venue": "Natural Language Processing and Information Systems: 21st International Conference on Applications of Natural Language to Information Systems, NLDB", "year": 2016}, {"authors": ["Samuel Weinbach."], "title": "GPT-NeoX-20B: An opensource autoregressive language model", "venue": "Proceedings of BigScience Episode #5 \u2013 Workshop on Challenges & Perspectives in Creating Large Language Models, pages 95\u2013136, virtual+Dublin. Association", "year": 2022}, {"authors": ["Souradip Chakraborty", "Amrit Singh Bedi", "Sicheng Zhu", "Bang An", "Dinesh Manocha", "Furong Huang."], "title": "On the possibilities of ai-generated text detection", "venue": "ArXiv preprint, abs/2304.04736.", "year": 2023}, {"authors": ["Hong Chen", "Hiroya Takamura", "Hideki Nakayama."], "title": "SciXGen: A scientific paper dataset for context-aware text generation", "venue": "Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1483\u20131492, Punta Cana, Dominican Re-", "year": 2021}, {"authors": ["Yulong Chen", "Yang Liu", "Liang Chen", "Yue Zhang."], "title": "DialogSum: A real-life scenario dialogue summarization dataset", "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 5062\u20135074, Online. Association for", "year": 2021}, {"authors": ["Wei-Lin Chiang", "Zhuohan Li", "Zi Lin", "Ying Sheng", "Zhanghao Wu", "Hao Zhang", "Lianmin Zheng", "Siyuan Zhuang", "Yonghao Zhuang", "Joseph E. Gonzalez", "Ion Stoica", "Eric P. Xing"], "title": "Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt", "year": 2023}, {"authors": ["Dasha Valter", "Sharan Narang", "Gaurav Mishra", "Adams Yu", "Vincent Zhao", "Yanping Huang", "Andrew Dai", "Hongkun Yu", "Slav Petrov", "Ed H. Chi", "Jeff Dean", "Jacob Devlin", "Adam Roberts", "Denny Zhou", "Quoc V. Le", "Jason Wei"], "title": "Scaling instruction-finetuned", "year": 2022}, {"authors": ["Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova."], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for", "year": 2019}, {"authors": ["T Fagni", "F Falchi", "M Gambini", "A Martella", "M Tesconi"], "title": "Tweepfake: About detecting deepfake tweets", "venue": "PLOS ONE,", "year": 2021}, {"authors": ["Angela Fan", "Yacine Jernite", "Ethan Perez", "David Grangier", "Jason Weston", "Michael Auli."], "title": "ELI5: Long form question answering", "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558\u20133567, Florence,", "year": 2019}, {"authors": ["Angela Fan", "Mike Lewis", "Yann Dauphin."], "title": "Hierarchical neural story generation", "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889\u2013898, Melbourne, Australia. Association", "year": 2018}, {"authors": ["Sebastian Gehrmann", "Hendrik Strobelt", "Alexander Rush."], "title": "GLTR: Statistical detection and visualization of generated text", "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 111\u2013116,", "year": 2019}, {"authors": ["Biyang Guo", "Xin Zhang", "Ziyuan Wang", "Minqi Jiang", "Jinran Nie", "Yuxuan Ding", "Jianwei Yue", "Yupeng Wu."], "title": "How close is chatgpt to human experts? comparison corpus, evaluation, and detection", "venue": "ArXiv preprint, abs/2301.07597.", "year": 2023}, {"authors": ["Qiao Jin", "Bhuwan Dhingra", "Zhengping Liu", "William Cohen", "Xinghua Lu."], "title": "PubMedQA: A dataset for biomedical research question answering", "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the", "year": 2019}, {"authors": ["Armand Joulin", "Edouard Grave", "Piotr Bojanowski", "Tomas Mikolov."], "title": "Bag of tricks for efficient text classification", "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa-", "year": 2017}, {"authors": ["Diederik P. Kingma", "Jimmy Ba."], "title": "Adam: A method for stochastic optimization", "venue": "3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "year": 2015}, {"authors": ["John Kirchenbauer", "Jonas Geiping", "Yuxin Wen", "Jonathan Katz", "Ian Miers", "Tom Goldstein."], "title": "A watermark for large language models", "venue": "ArXiv preprint, abs/2301.10226.", "year": 2023}, {"authors": ["Kalpesh Krishna", "Yixiao Song", "Marzena Karpinska", "John Wieting", "Mohit Iyyer."], "title": "Paraphrasing evades detectors of ai-generated text, but retrieval is an effective defense", "venue": "ArXiv preprint, abs/2303.13408.", "year": 2023}, {"authors": ["Thomas Lavergne", "Tanguy Urvoy", "Fran\u00e7ois Yvon."], "title": "Detecting fake content with relative entropy scoring", "venue": "PAN, 8:27\u201331.", "year": 2008}, {"authors": ["Weixin Liang", "Mert Y\u00fcksekg\u00f6n\u00fcl", "Yining Mao", "Eric Wu", "James Zou."], "title": "GPT detectors are biased against non-native english writers", "venue": "Patterns, 4(7):100779.", "year": 2023}, {"authors": ["Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov"], "title": "Roberta: A robustly optimized bert pretraining approach", "year": 2019}, {"authors": ["Andrew L. Maas", "Raymond E. Daly", "Peter T. Pham", "Dan Huang", "Andrew Y. Ng", "Christopher Potts."], "title": "Learning word vectors for sentiment analysis", "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human", "year": 2011}, {"authors": ["Hasan Mesut Meral", "B\u00fclent Sankur", "A Sumru \u00d6zsoy", "Tunga G\u00fcng\u00f6r", "Emre Sevin\u00e7."], "title": "Natural language watermarking via morphosyntactic alterations", "venue": "Computer Speech & Language, 23(1):107\u2013125.", "year": 2009}, {"authors": ["Eric Mitchell", "Yoonho Lee", "Alexander Khazatsky", "Christopher D. Manning", "Chelsea Finn."], "title": "Detectgpt: Zero-shot machine-generated text detection using probability curvature", "venue": "ArXiv preprint, abs/2301.11305.", "year": 2023}, {"authors": ["Nasrin Mostafazadeh", "Nathanael Chambers", "Xiaodong He", "Devi Parikh", "Dhruv Batra", "Lucy Vanderwende", "Pushmeet Kohli", "James Allen."], "title": "A corpus and cloze evaluation for deeper understanding of commonsense stories", "venue": "Proceedings of the 2016", "year": 2016}, {"authors": ["Preslav Nakov", "Sara Rosenthal", "Zornitsa Kozareva", "Veselin Stoyanov", "Alan Ritter", "Theresa Wilson"], "title": "SemEval-2013 task 2: Sentiment analysis", "year": 2013}, {"authors": ["Shashi Narayan", "Shay B. Cohen", "Mirella Lapata."], "title": "Don\u2019t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization", "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Lan-", "year": 2018}, {"authors": ["OpenAI."], "title": "GPT-4 technical report", "venue": "ArXiv preprint, abs/2303.08774.", "year": 2023}, {"authors": ["F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. Duchesnay"], "title": "Scikit-learn: Machine learning", "year": 2011}, {"authors": ["Jiameng Pu", "Zain Sarwar", "Sifat Muhammad Abdullah", "Abdullah Rehman", "Yoonjin Kim", "Parantapa Bhattacharya", "Mobin Javed", "Bimal Viswanath."], "title": "Deepfake text detection: Limitations and opportunities", "venue": "ArXiv preprint, abs/2210.09421.", "year": 2022}, {"authors": ["Peng Qi", "Yuhao Zhang", "Yuhui Zhang", "Jason Bolton", "Christopher D. Manning."], "title": "Stanza: A python natural language processing toolkit for many human languages", "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:", "year": 2020}, {"authors": ["Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever"], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI blog,", "year": 2019}, {"authors": ["Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J. Liu."], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "J. Mach. Learn. 
Res., 21:140:1\u2013140:67.", "year": 2020}, {"authors": ["Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang."], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383\u20132392, Austin,", "year": 2016}, {"authors": ["Juan Rodriguez", "Todd Hay", "David Gros", "Zain Shamsi", "Ravi Srinivasan."], "title": "Cross-domain detection of GPT-2-generated technical text", "venue": "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:", "year": 2022}, {"authors": ["Robin Rombach", "Andreas Blattmann", "Dominik Lorenz", "Patrick Esser", "Bj\u00f6rn Ommer."], "title": "Highresolution image synthesis with latent diffusion models", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans,", "year": 2022}, {"authors": ["Vinu Sankar Sadasivan", "Aounon Kumar", "Sriram Balasubramanian", "Wenxiao Wang", "Soheil Feizi"], "title": "Can ai-generated text be reliably detected? ArXiv preprint, abs/2303.11156", "year": 2023}, {"authors": ["Julian Salazar", "Davis Liang", "Toan Q. Nguyen", "Katrin Kirchhoff."], "title": "Masked language model scoring", "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699\u20132712, Online. Association for Computational", "year": 2020}, {"authors": ["drea Santilli", "Thibault F\u00e9vry", "Jason Alan Fries", "Ryan Teehan", "Teven Le Scao", "Stella Biderman", "Leo Gao", "Thomas Wolf", "Alexander M. Rush"], "title": "Multitask prompted training enables zero-shot task generalization", "year": 2022}, {"authors": ["Abigail See", "Peter J. Liu", "Christopher D. 
Manning."], "title": "Get to the point: Summarization with pointergenerator networks", "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073\u2013", "year": 2017}, {"authors": ["Shuming Shi", "Enbo Zhao", "Bi Wei", "Cai Deng", "Leyang Cui", "Xinting Huang", "Haiyun Jiang", "Duyu Tang", "Kaiqiang Song", "Wang Longyue", "Chengyan Huang", "Guoping Huang", "Yan Wang", "Li Piji"], "title": "Effidit: An assistant for improving writing efficiency", "year": 2023}, {"authors": ["Chenhao Tan", "Vlad Niculae", "Cristian DanescuNiculescu-Mizil", "Lillian Lee."], "title": "Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions", "venue": "Proceedings of the 25th International Conference on World Wide", "year": 2016}, {"authors": ["Tatsunori B. Hashimoto"], "title": "Stanford alpaca: An instruction-following llama model. https:// github.com/tatsu-lab/stanford_alpaca", "year": 2023}, {"authors": ["Adaku Uchendu", "Thai Le", "Kai Shu", "Dongwon Lee."], "title": "Authorship attribution for neural text generation", "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8384\u20138395, Online. Association for", "year": 2020}, {"authors": ["Ben Wang", "Aran Komatsuzaki."], "title": "GPT-J6B: A 6 Billion Parameter Autoregressive Language Model", "venue": "https://github.com/kingoflolz/ mesh-transformer-jax.", "year": 2021}, {"authors": ["Rowan Zellers", "Ari Holtzman", "Yonatan Bisk", "Ali Farhadi", "Yejin Choi"], "title": "2019a. 
HellaSwag: Can a machine really finish your sentence", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019}, {"authors": ["Rowan Zellers", "Ari Holtzman", "Hannah Rashkin", "Yonatan Bisk", "Ali Farhadi", "Franziska Roesner", "Yejin Choi."], "title": "Defending against neural fake news", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Informa-", "year": 2019}, {"authors": ["Lvmin Zhang", "Maneesh Agrawala."], "title": "Adding conditional control to text-to-image diffusion models", "venue": "ArXiv preprint, abs/2302.05543.", "year": 2023}, {"authors": ["Wang", "Luke Zettlemoyer"], "title": "2022a. Opt: Open pre-trained transformer language models", "year": 2022}, {"authors": ["Xiang Zhang", "Junbo Jake Zhao", "Yann LeCun."], "title": "Character-level convolutional networks for text classification", "venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12,", "year": 2015}, {"authors": ["Yue Zhang", "Bo Zhang", "Zhenghua Li", "Zuyi Bao", "Chen Li", "Min Zhang."], "title": "SynGEC: Syntax-enhanced grammatical error correction with a tailored GECoriented parser", "venue": "Proceedings of the 2022 Conference on Empirical Methods in Natural Language", "year": 2022}, {"authors": ["Xuandong Zhao", "Yu-Xiang Wang", "Lei Li."], "title": "Protecting language generation models via invisible watermarking", "venue": "ArXiv preprint, abs/2302.03162.", "year": 2023}, {"authors": ["Wanjun Zhong", "Duyu Tang", "Zenan Xu", "Ruize Wang", "Nan Duan", "Ming Zhou", "Jiahai Wang", "Jian Yin."], "title": "Neural deepfake detection with factual structure of text", "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process-", "year": 2020}]
|
acl_23
|
PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models
|
[{"affiliations": [], "name": "Haoran Li"}, {"affiliations": [], "name": "Dadi Guo"}, {"affiliations": [], "name": "Donghao Li"}, {"affiliations": [], "name": "Wei Fan"}, {"affiliations": [], "name": "Qi Hu"}, {"affiliations": [], "name": "Xin Liu"}, {"affiliations": [], "name": "Chunkit Chan"}, {"affiliations": [], "name": "Duanyi Yao"}, {"affiliations": [], "name": "Yuan Yao"}, {"affiliations": [], "name": "Yangqiu Song"}]
|
SP:bfb711759e990d387fcfcd9e8427e8b231a4c7fa
|
The rapid development of language models (LMs) brings unprecedented accessibility and usage for both models and users. On the one hand, powerful LMs achieve state-of-the-art performance over numerous downstream NLP tasks. On the other hand, more and more attention is paid to unrestricted model access that may bring malicious privacy risks of data leakage. To address these issues, many recent works propose privacy-preserving language models (PPLMs) with differential privacy (DP). Unfortunately, different DP implementations make a fair comparison among existing PPLMs challenging. In this paper, we present PrivLM-Bench, a multi-perspective privacy evaluation benchmark to empirically and intuitively quantify the privacy leakage of LMs. Instead of only reporting DP parameters, PrivLM-Bench sheds light on the neglected inference data privacy during actual usage. PrivLM-Bench first clearly defines multifaceted privacy objectives. Then, PrivLM-Bench constructs a unified pipeline to perform private fine-tuning. Lastly, PrivLM-Bench performs existing privacy attacks on LMs with pre-defined privacy objectives as the empirical evaluation results. The empirical attack results are used to fairly and intuitively evaluate the privacy leakage of various PPLMs. We conduct extensive experiments on three datasets of GLUE for mainstream LMs.1
|
1 Introduction

The accelerating evolution of language models (LMs) ushers in a new era for both modern natural language processing and society at large. Currently, generative large language models (LLMs) exhibit surprising capability and integrate previous tasks into a unified text-generation formulation. As
*Equal contribution.
1Code is publicly available at https://github.com/HKUST-KnowComp/PrivLM-Bench.
a result, these LLMs achieve dominant performance on both expert-designed tasks and real-world problems (Raffel et al., 2020; Chung et al., 2022a; Brown et al., 2020; OpenAI, 2023; Ouyang et al., 2022). Moreover, under appropriate instructions, LLMs can even act as in-context learners or zero-shot reasoners to solve unseen tasks (Chen et al., 2021; Zhou et al., 2023; Kojima et al., 2022; Wei et al., 2022; Sanh et al., 2022).
Beneath the improved performance, LMs’ training data also scale up with model size. LMs are not only trained on annotated textual data for specific tasks, but also devour vast amounts of textual data online. Unlike carefully crowd-sourced annotation data, free-form texts crawled from the Internet suffer from poor quality and unintended personal data leakage. For example, simple model interactions can lead to accidental dissemination of personally identifiable information (PII) (Li et al., 2023a; Lukas et al., 2023; Huang et al., 2022; Carlini et al., 2021b). Such PII exposure without notifying victims or obtaining their consent may violate existing privacy laws like the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
To respect data subjects’ privacy during model training/fine-tuning, several studies (Qu et al., 2021; Yue et al., 2022; Yu et al., 2022; Igamberdiev and Habernal, 2023) consider privacy protection as an additional objective. Differential privacy (Dwork and Roth, 2014), known for its wide adaptability and applicability, has become the mainstream approach for privacy-preserving LMs (PPLMs). DP’s definition offers plausible deniability (Bindschaedler et al., 2017) and introduces bounded privacy parameters to describe the effectiveness of examined mechanisms. This definition naturally defends against membership inference attacks (Shokri et al., 2016), which aim to determine whether a given sample belongs to the model’s training dataset. Currently, motivated by DPSGD (Abadi
et al., 2016), PPLMs can be achieved via various implementations based on DP optimizers.
Unfortunately, although numerous implementations of PPLMs have been proposed, fair evaluation of PPLMs is still unexplored. Existing mainstream approaches simply use DP parameters to quantify PPLMs’ privacy, which makes a fair comparison among PPLMs rather hard. Firstly, different DP formulations, such as central DP, local DP (Kasiviswanathan et al., 2011), and dχ privacy (Chatzikokolakis et al., 2013), assign distinct heuristic meanings to the DP parameters used to implement PPLMs. Secondly, the scope of protection is ambiguous. For instance, most PPLMs implemented via DPSGD offer privacy protection for the sensitive tuning data; however, during inference, these PPLMs are not guaranteed to protect inference data privacy. Thus, simply claiming that these PPLMs are privacy-preserving ignores inference data privacy. Lastly, it remains unclear whether DP’s worst-case upper bound overestimates privacy leakage. DP assumes a herculean adversary who can manipulate the entire protected dataset, which may be implausible for actual attacks. Consequently, evaluation under the same DP parameters may still result in different privacy performance under empirical privacy attacks.
To bridge the aforementioned gap, in this work we propose PrivLM-Bench to fairly quantify PPLMs’ privacy-utility trade-off. PrivLM-Bench adopts the prevailing setup of public pre-training and private fine-tuning with several clarified privacy objectives. PrivLM-Bench incorporates multifaceted privacy attacks, including the data extraction attack (Carlini et al., 2021b), the membership inference attack (Shokri et al., 2016) and the embedding-level privacy attack (Song and Raghunathan, 2020), to evaluate the privacy of PPLMs. The attack results serve as an intuitive and fair indicator of the privacy leakage of existing PPLMs regardless of their detailed implementations. In summary, we highlight the following contributions of our proposed PrivLM-Bench:
1) PrivLM-Bench identifies inference data privacy as a key component of PPLMs’ privacy evaluation and points out that DP-tuning cannot quantify inference data privacy after PPLMs are deployed in real-life applications.
2) PrivLM-Bench provides a unified pipeline that allows fair comparisons among PPLMs.
3) We use PrivLM-Bench to conduct extensive experiments on mainstream PPLMs. Empirical
findings indicate that current privacy attacks are significantly less potent than the attacker capabilities assumed by defense mechanisms.
|
To analyze differential privacy implementations on language models, we first introduce the formal definition of DP (Dwork and Roth, 2014):
Definition 1 (Differential Privacy). A randomized mechanism M with domain D and range R satisfies (ϵ, δ)-differential privacy if for any two neighboring datasets D, D′ and any subset of outputs O ⊆ R:
Pr[M(D) ∈ O] ≤ e^ϵ Pr[M(D′) ∈ O] + δ. (1)
The neighboring datasets D, D′ differ in only one element. When we apply Definition 1 to LMs, D refers to the private fine-tuning dataset and M usually refers to the LM updated with a DP mechanism, so that the LM can be safely released while preserving D’s privacy. For our experiments, DP optimizers such as DPSGD (Abadi et al., 2016) are used as the backbone to implement various PPLMs with a DP guarantee. Several alternative techniques can also build PPLMs: Homomorphic Encryption (HE) can ensure a PPLM’s data privacy via encryption during the inference stage (Chen et al., 2022), and Secure Multiparty Computation (SMPC) protects the privacy of shared data and model parameters between service providers and users (Wang et al., 2022; Hao et al., 2022; Luo et al., 2023, 2024). Existing works commonly exploit various DP mechanisms to implement DP-based LMs with respect to a given fine-tuning corpus, and can be summarized into three categories: (1) DP fine-tuning with DP optimizers (Qu et al., 2021; Shi et al., 2022; Mattern et al., 2022; Yue et al., 2022; Li et al., 2022b; Yu et al., 2022), a prevailing approach to protect private fine-tuning datasets; (2) DP prompt tuning (Duan et al., 2023; Li et al., 2023c), which adds noise to soft prompts and performs private prompt tuning (Lester et al., 2021; Li and Liang, 2021) with the LM’s parameters frozen; (3) embedding perturbation (Igamberdiev and Habernal, 2023; Feyisetan et al., 2020; Krishna et al., 2021), which injects DP noise into intermediate representations.
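The DP optimizers mentioned above follow the DPSGD recipe: clip each per-sample gradient to a fixed norm, average, and add calibrated Gaussian noise. A minimal numpy sketch of one such update, where the learning rate and noise multiplier are illustrative values rather than settings from any cited work:

```python
import numpy as np

def dpsgd_step(per_sample_grads, clip_norm=0.1, noise_multiplier=1.0,
               lr=1e-4, rng=None):
    """One DP-SGD update: clip each per-sample gradient to `clip_norm`,
    average, add Gaussian noise, and return the parameter delta."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds the clip bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std follows the DPSGD recipe: sigma * C / batch_size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return -lr * (mean_grad + noise)
```

The clipping bounds each example's influence on the update, which is what lets the Gaussian noise translate into an (ϵ, δ) guarantee via composition accounting.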
For privacy evaluation, most of these works simply report (ϵ, δ) pairs as the privacy budget. Still, a few works endeavor to measure or explain privacy in alternative ways. Auditing mechanisms (Nasr et al., 2023; Jagielski et al., 2020; Lu et al., 2022) aim to audit empirical privacy leakage. Feyisetan et al. (2020) proposed plausible-deniability statistics to quantify plausible deniability given DP parameters. Li et al. (2023c) and Du et al. (2023) evaluated privacy via empirical embedding-level attacks (Song and Raghunathan, 2020). Motivated by such empirical evaluations, PrivLM-Bench further clarifies the privacy objectives and integrates more attacks for empirical privacy evaluation.
| null | null |
In this paper, we introduce PrivLM-Bench, a benchmark designed to assess and contrast LMs’ multifaceted privacy objectives. By integrating a variety of masked and generative LMs with diverse tuning algorithms, PrivLM-Bench facilitates an extensive evaluation that encompasses both utility metrics and empirical privacy attacks. Our results reveal the effectiveness and limitations of various DP tuning methods. Moreover, we point out that empirical privacy attacks pose a less potent threat than the powerful adversaries assumed by defense mechanisms. In conclusion, our study underscores that privacy evaluation, and the balancing act between privacy and utility, remains a complex, ongoing challenge in the field. For future work, we advocate for more potent privacy attacks and utility-enhanced defense strategies that relax the worst-case restriction in accordance with empirical attacks to improve the privacy-utility trade-off.
Limitations
In our evaluation of language model privacy from an adversarial standpoint, we acknowledge certain limitations in the covered scope and effectiveness of the proposed attacks. Firstly, our study does not encompass all attack methodologies, notably excluding the recent trend of prompt injection attacks, which are significant in assessing the safety of large language models. This omission represents an area for potential future exploration to provide a more comprehensive understanding of LLM vulnerabilities. Secondly, the efficacy of our attacks in certain scenarios was limited. For instance, our LiRA-based MIA experiments performed poorly at discerning membership status in non-differentially private (non-DP) tuned BERT models.
In addition, from the defenders’ perspective, though DP-tuned models can effectively protect sensitive fine-tuning data from being inferred, many DP-tuned LMs suffer from poor utility. Moreover, the evaluated DP tuning strategies cannot protect inference-stage data privacy.
In summary, these limitations emphasize the need for ongoing development in the field of LM privacy attacks and defenses.
Ethical Considerations
We declare that all authors of this paper acknowledge the ACM Code of Ethics and honor the code of conduct. This work comprehensively evaluates the empirical privacy of LMs and PPLMs via existing privacy attacks. The purpose of these attacks is not to corrupt the given LMs. Instead, we aim to test LMs’ robustness against known attacks and show that DP can effectively protect sensitive tuning data while failing to guarantee inference data privacy. Our findings reveal that LLMs still need further improvement for a better privacy-utility trade-off.
Data. During our experiments, besides downstream fine-tuning data from GLUE tasks, we also pre-define several canary patterns with randomly generated or LLM-generated PII. Since these canaries do not include any actual personal information, the inserted canaries are safe to release and use.
Attacks. We are well aware that our evaluated privacy attacks may be used for malicious purposes. However, our experimental results show that these empirical privacy attacks are rather weak and can be easily countered with
defense mechanisms. For example, data extraction attacks can be easily addressed with content filters to avoid unsafe responses. We emphasize that these empirical privacy attacks are beneficial in enhancing defense strategies.
|
Existing PPLMs evaluate their claimed improvement on tailored downstream tasks. These specific tasks may not be feasible for other PPLMs. Instead, PrivLM-Bench evaluates PPLMs on a more fundamental aspect: natural language understanding (NLU). NLU is essential for general LMs to identify the meaning of given texts. PrivLM-Bench evaluates PPLMs on several NLU tasks from GLUE (Wang et al., 2018), including Multi-Genre Natural Language Inference (MNLI) (Williams et al., 2018), the Stanford Sentiment Treebank v2 (SST2) (Socher et al., 2013) and QNLI, converted from the Stanford Question Answering Dataset (Rajpurkar et al., 2016). PrivLM-Bench’s evaluated datasets, MNLI, SST2 and QNLI, can be naturally formulated as classification tasks with given labels. However, besides BERT-style masked LMs, PPLMs also include generative LMs such as GPT-2 that behave poorly under conventional classification pipelines. We therefore additionally transform the evaluated datasets to fit the generation pipeline. Inspired by T5’s formulation (Raffel et al., 2020), given the premises and hypotheses of NLI datasets, or sentences from SST2, with corresponding integer labels, we manually create textual templates with labels and concatenate prefix sentences with the templates. For example, given a sample consisting of a premise and a hypothesis with label 0, the converted sample for generation becomes a single sentence [premise <SEP> hypothesis <SEP> The relation is 0], where <SEP> is the special separator token. We keep the integer label in the transformed sentence for an easier text-infilling formulation. After this pre-processing, PrivLM-Bench can evaluate most existing PPLMs. Since PrivLM-Bench unifies both classification and generation tasks, various model architectures with different pre-trained weights can be evaluated. Specifically, we evaluate BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), GPT-2 (Radford et al., 2019), T5 (Raffel et al., 2020) and FLAN-T5 (Chung et al., 2022b) at different scales, with four tuning algorithms, with and without the DP guarantee.
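The label-in-template conversion described above can be sketched as follows; the exact separator token and spacing are assumptions, since only the bracketed template is given:

```python
SEP = "<SEP>"  # stand-in for the model's special separator token

def to_generation_sample(premise: str, hypothesis: str, label: int) -> str:
    """Flatten an NLI pair and its integer label into one training sentence,
    following the [premise <SEP> hypothesis <SEP> The relation is N] template."""
    return f"{premise} {SEP} {hypothesis} {SEP} The relation is {label}"
```

A generative LM is then trained to produce the whole sentence, so classification reduces to checking which label token it generates after "The relation is".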
The following content gives a brief summary of our evaluated tuning algorithms.
Full fine-tuning. Full fine-tuning refers to the standard approach of updating the whole model. For masked LMs such as BERT and RoBERTa, we append one extra linear layer to perform classification. For generative LMs, including GPT-2 and T5, we use a language modeling head with the language modeling loss to conduct next-token generation.
Prompt tuning (Lester et al., 2021). For prompt tuning, we freeze the LM and prepend learnable virtual tokens to every data sample. Instead of updating the whole LM, prompt tuning only optimizes these task-specific virtual tokens.
Prefix tuning (Li and Liang, 2021). Prefix tuning shares a similar idea with prompt tuning. Unlike prompt tuning, which prepends a few tokens at the beginning of the input, prefix tuning attaches a sequence of continuous task-specific vectors in front of the inputs; these vectors are concatenated into the LM’s hidden states at every transformer layer, not only at the input layer. Only the appended vectors are updated throughout prefix tuning.
Infilling-based tuning (Petroni et al., 2019). For masked LMs, instead of appending classifiers to their final output representations, infilling the masked tokens can also be exploited to perform classification tasks. We follow the pre-processing pipeline and leave blanks for the predicted positions, like [premise <SEP> hypothesis <SEP> The relation is <MASK>], where <MASK> refers to the mask token. For infilling-based tuning, we follow previous works and update the whole LM.

Data split. Both membership inference attacks and embedding-level attacks require auxiliary datasets that share similar distributions with the original datasets. For each evaluated dataset, we randomly split off 40% of the training data as the auxiliary dataset and use the remaining 60% for downstream tuning.
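The 40/60 auxiliary/tuning split can be sketched as below; the helper and its seed are hypothetical, since the paper's exact splitting code is not shown:

```python
import random

def split_aux_and_train(samples, aux_frac=0.4, seed=0):
    """Carve off `aux_frac` of the training data as the attacker's auxiliary
    set and keep the rest for downstream tuning."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)          # deterministic shuffle
    cut = int(len(samples) * aux_frac)
    aux = [samples[i] for i in idx[:cut]]
    train = [samples[i] for i in idx[cut:]]
    return aux, train
```

Keeping the two sets disjoint matters here: the auxiliary set stands in for the attacker's knowledge of the data distribution without overlapping the actual fine-tuning members.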
DP Parameters. During our experiments, we follow previous studies (Li et al., 2022c; Yu et al., 2021) and fix δ = 1e-5 and ϵ = 8. Additionally, we clip per-sample gradient norms to at most 0.1.
Tuning parameters. For all four tuning algorithms, we use AdamW as the optimizer with a linear decay schedule. We train all models for 5 epochs with a virtual batch size of 1,024. In terms of learning rates, for fine-tuning and infilling the learning rate is 1e-4; for prompt and prefix tuning it is 1e-2. We set 15 virtual tokens for both prompt and prefix tuning.

For the experiments, we first raise a few crucial research questions (RQs) and use our experimental results to address them individually.
• RQ1: Do LMs share similar utility under the same DP budget? If not, what factors affect LMs’ utility?
• RQ2: Do various tuning algorithms yield similar performance on the same model?
• RQ3: Are empirical privacy attacks effective on LMs with and without privacy protection?

By fixing the values of the (ϵ, δ) pairs during DP tuning, we can compare PPLMs’ utility between masked LMs and generative LMs. In Table 2, we comprehensively list the utility of various LMs at different scales on the MNLI dataset. These results suggest that LMs’ utility is affected by multiple factors, including model architectures, pre-trained weights, model sizes, tuning algorithms and DP constraints.
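A virtual batch size of 1,024 is typically realized by gradient accumulation over smaller micro-batches. A toy sketch with plain SGD standing in for AdamW (the micro-batch size of 32 is an assumption, not a reported setting):

```python
import numpy as np

def accumulate_and_step(param, micro_grads, virtual_batch=1024,
                        micro_batch=32, lr=1e-4):
    """Emulate one optimizer step over a virtual batch by averaging
    `virtual_batch // micro_batch` micro-batch gradients before updating."""
    steps = virtual_batch // micro_batch
    acc = np.zeros_like(param)
    for g in micro_grads[:steps]:
        acc += g                      # accumulate, do not step yet
    return param - lr * acc / steps   # one update per virtual batch
```

Accumulation trades wall-clock time for memory: the model only ever holds one micro-batch of activations, while the optimizer sees the statistics of the full virtual batch.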
Model architectures with pre-trained weights. Controlling for similar model size, among masked LMs, RoBERTa models outperform BERT models by around 10% accuracy on fine-tuning and prefix tuning, and achieve comparable accuracy on prompt tuning. In addition, under the same T5 architecture at a similar size, FLAN-T5-XL still significantly surpasses T5-XL in both the DP and non-DP settings. Such utility improvement likely comes from FLAN-T5’s large-scale instruction tuning. Moreover, among generative LMs, T5-base even significantly surpasses GPT-2 models from the small to the xl size in terms of fine-tuning and prefix tuning. These results suggest that model architectures, including their pre-trained weights, play a crucial role in downstream utility for tuning with and without DP constraints.
Model size. Observing the accuracy across different scales of BERT, RoBERTa, GPT-2 and T5, we see that larger model sizes tend to bring better utility for both DP and non-DP tuning.
DP constraints. Holding the tuning method and model fixed, we can still observe that DP tuning leads to non-negligible utility degradation across all four algorithms, from fine-tuning to infilling.
Tuning algorithms. Additionally, results from Table 2 suggest that distinct LMs have varied resilience when we switch from non-DP tuning to DP tuning. For instance, for a given LM, fine-tuning mostly yields the best results with and without DP, losing roughly 10% accuracy under DP tuning. However, for prompt and prefix tuning on masked LMs, severe accuracy drops can be observed. Moreover, prompt tuning and prefix tuning are unreliable even in the non-DP setting for generative LMs such as GPT-2. These results indicate that prefix tuning is better than prompt tuning for both DP and non-DP tuning under the same number of virtual tokens. For infilling-based knowledge probing, our results suggest that even though infilling has comparable performance with fine-tuning in the non-DP setting, infilling with noisy updates may yield poorer results than direct fine-tuning under DP.

In this section, we perform privacy evaluations of the mainstream LMs with and without DP tuning. In Table 3, we list masked LMs’ privacy evaluation with empirical privacy attacks for both fine-tuning data privacy (MIAs) and inference data privacy (EIAs). For generative LMs, we report the mean exposure of GPT-2-large and T5-large in Figure 2.
MIAs. For MIAs, all DP-tuned LMs achieve
AUC scores around 0.5, indicating that MIAs on these models are no better than random guessing. Hence, DP-tuned LMs conform to DP’s definition and offer robust fine-tuning data protection against MIAs. Conversely, for fine-tuned LMs without DP, MIAs are effective on RoBERTa models with AUC exceeding 0.6, showing that fine-tuned RoBERTa models are susceptible to MIAs. In contrast, MIAs on BERT models without DP only gain marginal improvement of no more than 1.4% compared to DP-tuned BERT models across three tuning algorithms. These findings underscore a critical gap in the field of privacy attacks. The actual performance of attacks falls short of the level of threat assumed by existing defense mechanisms.
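Membership inference of this kind is scored by the AUC of a per-example attack score (in LiRA, a likelihood-ratio statistic). A minimal sketch of computing that AUC via the Mann-Whitney formulation; the "higher score means member" convention is an assumption:

```python
def attack_auc(member_scores, nonmember_scores):
    """AUC of a membership test that thresholds a per-example score
    (higher = more likely a member), via the Mann-Whitney U statistic."""
    pairs = len(member_scores) * len(nonmember_scores)
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1.0   # member correctly ranked above non-member
            elif m == n:
                wins += 0.5   # ties count half
    return wins / pairs
```

An AUC of 0.5 means member and non-member scores are statistically indistinguishable, which is exactly the "no better than random guessing" outcome reported for the DP-tuned models.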
EIAs. For EIAs, since inference data privacy is not guaranteed by DP-tuning, we study inference data privacy with and without DP tuning. Comparing EIAs’ performance between the DP and non-DP settings across all models in Table 3, we observe minimal variation, with no more than 2% deviation. This indicates that DP tuning does not significantly alter the level of inference data leakage in EIAs compared to non-DP settings. Thus, naive DP tuning on sensitive data cannot protect inference data privacy, and additional privacy mechanisms are required to support a general privacy-protection claim.
DEAs. For DEAs, we utilize canaries’ exposure to measure generative LMs’ training data privacy by increasing the occurrences of certain canaries. As shown in the scatter plots of Figure 2, the results reveal that DP-tuning is effective against DEAs, with reduced exposure compared to the corresponding non-DP tuning. Even with increased insertion numbers, DP-based prompt tuning and prefix tuning remain effective against DEAs with low exposure, underscoring the effectiveness of DP-based methods in mitigating DEAs. However, when the occurrences of given canaries increase, DP fine-tuning still suffers from non-negligible increased exposure, which requires larger noise to mitigate. On the other hand, among the non-DP baselines, fine-tuning and prefix tuning suffer from high exposure as the insertion number increases, which conforms to previous findings on fine-tuning (Carlini et al., 2021b). Unexpectedly, our results show that prompt tuning achieves relatively low exposure with and without DP; non-DP prompt tuning yields lower exposure even than DP-based prefix tuning and fine-tuning.
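Canary exposure is conventionally computed as the log-size of the candidate space minus the log-rank of the inserted canary under the model. A minimal sketch under that definition; how the rank is obtained (e.g., by perplexity over all candidates) is left out:

```python
import math

def exposure(canary_rank: int, candidate_space: int) -> float:
    """Exposure of an inserted canary: log2(|candidate space|) - log2(rank),
    where rank 1 means the canary is the model's most likely candidate."""
    return math.log2(candidate_space) - math.log2(canary_rank)
```

High exposure means the model ranks the secret canary far above a random candidate, i.e., it has memorized the inserted sequence; exposure near zero means the canary is no more likely than chance.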
Upon analyzing the results, we summarize the following findings. 1) DP-tuning is robust against existing privacy attacks targeted at fine-tuning data. Regardless of the tuning method, DP’s worst-case bound strictly prevents the adversary from identifying sensitive fine-tuning data. 2) Naive DP-tuning falls short in safeguarding inference data privacy. Our analysis reveals that DP-based methods do not significantly outperform non-DP baselines in preventing EIAs. 3) The capability of existing privacy attacks does not align with the level of threat presumed by defense strategies. Notably, several non-DP tuning methods effectively resisted the evaluated attacks. For MIAs, we show that LiRA is unsuccessful on non-DP BERT models.

In this section, we list other empirical findings according to the evaluation results.
1) Large generative LMs have the potential to outperform masked LMs with DP on NLU. Our analysis, as presented in Table 2, reveals that FLAN-T5-XL with DP-based prefix tuning can match or even surpass the performance of masked LMs with DP on NLU tasks. 2) Vulnerabilities to privacy attacks are model-dependent. This is evident from the clustered patterns of attack performance for the same models at different sizes, as observed in Table 3 and Figure 2, suggesting that certain architectures may inherently possess more resilience to privacy breaches.
3) Parameter-efficient tuning methods are more resistant to privacy attacks. In terms of attack results under non-DP settings, both prompt and prefix tuning exhibit greater robustness compared to full fine-tuning. This is particularly notable in their performance against MIAs and DEAs, indicating that these tuning methods inherently offer enhanced privacy safeguards.
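The DP tuning evaluated throughout follows the DP-SGD recipe of Abadi et al. (2016): clip each per-sample gradient to a fixed norm, sum, then add calibrated Gaussian noise before averaging. A minimal NumPy sketch of one aggregation step (function name and toy gradients are illustrative, not the paper's implementation):

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step (sketch): clip each per-sample
    gradient to clip_norm, sum, add Gaussian noise, and average."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down only when the gradient exceeds the clip norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise scale is proportional to the per-sample sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```

The clip norm bounds any single example's influence (the worst-case guarantee discussed above), while the noise multiplier sets the privacy budget; the findings above suggest this worst-case bound is often much stronger than what the evaluated empirical attacks actually require.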
|
For future work, we advocate for more potent privacy attacks and utility-enhanced defense strategies that relax the worst-case restriction in accordance with empirical attacks, thereby improving the privacy-utility trade-off.
|
In our evaluation of language model privacy from an adversarial standpoint, we acknowledge certain limitations in the scope and effectiveness of the covered attacks. Firstly, our study does not encompass all attack methodologies, notably excluding the recent trend of prompt injection attacks, which are significant for assessing the safety of large language models. This omission represents an area for potential future exploration toward a more comprehensive understanding of LLM vulnerabilities. Secondly, the efficacy of our attacks in certain scenarios was limited: for instance, our LiRA-based MIA experiments demonstrated inadequate performance in discerning membership status for non-differentially-private (non-DP) tuned BERT models.
In addition, from the defenders’ perspective, though DP-tuned models can effectively prevent adversaries from inferring sensitive fine-tuning data, many DP-tuned LMs suffer from poor utility. Moreover, the evaluated DP tuning strategies cannot protect inference-stage data privacy.
In summary, these limitations emphasize the need for ongoing development in the field of LM privacy attacks and defenses.
|
[{"authors": ["Mart\u00edn Abadi", "Andy Chu", "Ian J. Goodfellow", "H.B. McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang."], "title": "Deep learning with differential privacy", "venue": "Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security.", "year": 2016}, {"authors": ["Vincent Bindschaedler", "Reza Shokri", "Carl Gunter."], "title": "Plausible deniability for privacy-preserving data synthesis", "venue": "Proceedings of the VLDB Endowment, 10:481\u2013492.", "year": 2017}, {"authors": ["Nicholas Carlini", "Steve Chien", "Milad Nasr", "Shuang Song", "A. Terzis", "Florian Tram\u00e8r."], "title": "Membership inference attacks from first principles", "venue": "2022 IEEE Symposium on Security and Privacy (SP), pages 1897\u20131914.", "year": 2021}, {"authors": ["Nicholas Carlini", "Chang Liu", "\u00dalfar Erlingsson", "Jernej Kos", "Dawn Song."], "title": "The secret sharer: Evaluating and testing unintended memorization in neural networks", "venue": "Proceedings of the 28th USENIX Conference on Security Symposium, SEC\u201919, page", "year": 2019}, {"authors": ["Nicholas Carlini", "Florian Tramer", "Eric Wallace", "Matthew Jagielski", "Ariel Herbert-Voss", "Katherine Lee", "Adam Roberts", "Tom Brown", "Dawn Song", "Ulfar Erlingsson", "Alina Oprea", "Colin Raffel"], "title": "Extracting training data from large language", "year": 2021}, {"authors": ["Konstantinos Chatzikokolakis", "Miguel E. Andr\u00e9s", "Nicol\u00e1s Emilio Bordenabe", "Catuscia Palamidessi."], "title": "Broadening the scope of differential privacy using metrics", "venue": "Privacy Enhancing Technologies, pages 82\u2013102, Berlin, Heidelberg. 
Springer Berlin", "year": 2013}, {"authors": ["Sutskever", "Wojciech Zaremba."], "title": "Evaluating large language models trained on code", "venue": "ArXiv, abs/2107.03374.", "year": 2021}, {"authors": ["Tianyu Chen", "Hangbo Bao", "Shaohan Huang", "Li Dong", "Binxing Jiao", "Daxin Jiang", "Haoyi Zhou", "Jianxin Li."], "title": "The-x: Privacy-preserving transformer inference with homomorphic encryption", "venue": "arXiv preprint arXiv:2206.00216.", "year": 2022}, {"authors": ["Narang", "Gaurav Mishra", "Adams Yu", "Vincent Zhao", "Yanping Huang", "Andrew Dai", "Hongkun Yu", "Slav Petrov", "Ed H. Chi", "Jeff Dean", "Jacob Devlin", "Adam Roberts", "Denny Zhou", "Quoc V. Le", "Jason Wei"], "title": "Scaling instruction-finetuned language", "year": 2022}, {"authors": ["Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova."], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for", "year": 2019}, {"authors": ["Minxin Du", "Xiang Yue", "Sherman S.M. Chow", "Huan Sun."], "title": "Sanitizing sentence embeddings (and labels) for local differential privacy", "venue": "Proceedings of the ACM Web Conference 2023, WWW \u201923, page 2349\u20132359, New York, NY, USA. Association for", "year": 2023}, {"authors": ["Haonan Duan", "Adam Dziedzic", "Nicolas Papernot", "Franziska Boenisch."], "title": "Flocks of stochastic parrots: Differentially private prompt learning for large language models", "venue": "arXiv preprint arXiv:2305.15594.", "year": 2023}, {"authors": ["C. Dwork", "A. 
Roth."], "title": "The algorithmic foundations of differential privacy", "venue": "The Algorithmic Foundations of Differential Privacy, pages 19\u201320.", "year": 2014}, {"authors": ["Oluwaseyi Feyisetan", "Borja Balle", "Thomas Drake", "Tom Diethe"], "title": "Privacy- and utility-preserving textual analysis via calibrated multivariate perturbations", "venue": "In Proceedings of the 13th International Conference on Web Search and Data Mining,", "year": 2020}, {"authors": ["Kang Gu", "Ehsanul Kabir", "Neha Ramsurrun", "Soroush Vosoughi", "Shagufta Mehnaz."], "title": "Towards sentence level inference attack against pre-trained language models", "venue": "Proc. Priv. Enhancing Technol., 2023:62\u201378.", "year": 2023}, {"authors": ["Meng Hao", "Hongwei Li", "Hanxiao Chen", "Pengzhi Xing", "Guowen Xu", "Tianwei Zhang."], "title": "Iron: Private inference on transformers", "venue": "Advances in Neural Information Processing Systems, 35:15718\u201315731.", "year": 2022}, {"authors": ["Jie Huang", "Hanyin Shao", "Kevin Chen-Chuan Chang"], "title": "Are large pre-trained language models leaking your personal information? In Findings of the Association for Computational Linguistics: EMNLP 2022", "year": 2022}, {"authors": ["Timour Igamberdiev", "Ivan Habernal."], "title": "Dp-bart for privatized text rewriting under local differential privacy", "venue": "Findings of the Association for Computational Linguistics: ACL 2023, page (to appear), Toronto, Canada. Association for Computational Lin-", "year": 2023}, {"authors": ["Matthew Jagielski", "Jonathan Ullman", "Alina Oprea."], "title": "Auditing differentially private machine learning: How private is private sgd? 
In Proceedings of NIPS 2020, NIPS\u201920, Red Hook, NY, USA", "venue": "Curran Associates Inc.", "year": 2020}, {"authors": ["Shiva Prasad Kasiviswanathan", "Homin K Lee", "Kobbi Nissim", "Sofya Raskhodnikova", "Adam Smith"], "title": "What can we learn privately", "venue": "SIAM Journal on Computing,", "year": 2011}, {"authors": ["Takeshi Kojima", "Shixiang (Shane) Gu", "Machel Reid", "Yutaka Matsuo", "Yusuke Iwasawa"], "title": "Large language models are zero-shot reasoners", "venue": "In Advances in Neural Information Processing Systems,", "year": 2022}, {"authors": ["Satyapriya Krishna", "Rahul Gupta", "Christophe Dupuy."], "title": "ADePT: Auto-encoder based differentially private text transformation", "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main", "year": 2021}, {"authors": ["Brian Lester", "Rami Al-Rfou", "Noah Constant."], "title": "The power of scale for parameter-efficient prompt tuning", "venue": "Proceedings of the EMNLP 2021, pages 3045\u20133059, Online and Punta Cana, Dominican Republic. 
Association for Computational Linguistics.", "year": 2021}, {"authors": ["Haoran Li", "Dadi Guo", "Wei Fan", "Mingshi Xu", "Jie Huang", "Fanpu Meng", "Yangqiu Song"], "title": "Multi-step jailbreaking privacy attacks on chatgpt", "year": 2023}, {"authors": ["Haoran Li", "Yangqiu Song", "Lixin Fan."], "title": "You don\u2019t know my favorite color: Preventing dialogue representations from revealing speakers\u2019 private personas", "venue": "Proceedings of the 2022 Conference of the North American Chapter of the Association for", "year": 2022}, {"authors": ["Haoran Li", "Mingshi Xu", "Yangqiu Song."], "title": "Sentence embedding leaks more information than you expect: Generative embedding inversion attack to recover the whole sentence", "venue": "Findings of the Association for Computational Linguistics: ACL 2023,", "year": 2023}, {"authors": ["Xiang Lisa Li", "Percy Liang."], "title": "Prefix-tuning: Optimizing continuous prompts for generation", "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language", "year": 2021}, {"authors": ["Xuechen Li", "Daogao Liu", "Tatsunori Hashimoto", "Huseyin A Inan", "Janardhan Kulkarni", "YinTat Lee", "Abhradeep Guha Thakurta"], "title": "2022b. 
When does differentially private learning not suffer in high dimensions", "year": 2022}, {"authors": ["Xuechen Li", "Florian Tramer", "Percy Liang", "Tatsunori Hashimoto."], "title": "Large language models can be strong differentially private learners", "venue": "International Conference on Learning Representations.", "year": 2022}, {"authors": ["Yansong Li", "Zhixing Tan", "Yang Liu."], "title": "Privacypreserving prompt tuning for large language model services", "venue": "arXiv preprint arXiv:2305.06212.", "year": 2023}, {"authors": ["Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov."], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": "ArXiv, abs/1907.11692.", "year": 2019}, {"authors": ["Fred Lu", "Joseph Munoz", "Maya Fuchs", "Tyler LeBlond", "Elliott V. Zaresky-Williams", "Edward Raff", "Francis Ferraro", "Brian Testa."], "title": "A general framework for auditing differentially private machine learning", "venue": "Proceedings of NIPS 2022.", "year": 2022}, {"authors": ["Nils Lukas", "A. Salem", "Robert Sim", "Shruti Tople", "Lukas Wutschitz", "Santiago Zanella-B\u2019eguelin"], "title": "Analyzing leakage of personally identifiable information in language models. ArXiv, abs/2302.00539", "year": 2023}, {"authors": ["Jinglong Luo", "Yehong Zhang", "Jiaqi Zhang", "Xin Mu", "Hui Wang", "Yue Yu", "Zenglin Xu"], "title": "Secformer: Towards fast and accurate privacy-preserving inference for large language models", "year": 2024}, {"authors": ["Justus Mattern", "Zhijing Jin", "Benjamin Weggenmann", "Bernhard Schoelkopf", "Mrinmaya Sachan."], "title": "Differentially private language models for secure data sharing", "venue": "Proceedings of EMNLP 2022, pages 4860\u20134873, Abu Dhabi, United Arab Emirates. As-", "year": 2022}, {"authors": ["John X. Morris", "Volodymyr Kuleshov", "Vitaly Shmatikov", "Alexander M. 
Rush"], "title": "Text embeddings reveal (almost) as much as text", "year": 2023}, {"authors": ["Milad Nasr", "Jamie Hayes", "Thomas Steinke", "Borja Balle", "Florian Tram\u00e8r", "Matthew Jagielski", "Nicholas Carlini", "Andreas Terzis."], "title": "Tight auditing of differentially private machine learning", "venue": "arXiv preprint arXiv:2302.07956.", "year": 2023}, {"authors": ["OpenAI."], "title": "Gpt-4 technical report", "venue": "ArXiv, abs/2303.08774.", "year": 2023}, {"authors": ["Paul Christiano", "Jan Leike", "Ryan Lowe."], "title": "Training language models to follow instructions with human feedback", "venue": "Advances in Neural Information Processing Systems.", "year": 2022}, {"authors": ["James Jie Pan", "Jianguo Wang", "Guoliang Li."], "title": "Survey of vector database management systems", "venue": "arXiv preprint arXiv:2310.14021.", "year": 2023}, {"authors": ["Xudong Pan", "Mi Zhang", "Shouling Ji", "Min Yang."], "title": "Privacy risks of general-purpose language models", "venue": "Proceedings of 2020 IEEE Symposium on Security and Privacy (SP), pages 1314\u20131331.", "year": 2020}, {"authors": ["Fabio Petroni", "Tim Rockt\u00e4schel", "Sebastian Riedel", "Patrick Lewis", "Anton Bakhtin", "Yuxiang Wu", "Alexander Miller"], "title": "Language models as knowledge bases", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Pro-", "year": 2019}, {"authors": ["Chen Qu", "Weize Kong", "Liu Yang", "Mingyang Zhang", "Michael Bendersky", "Marc Najork."], "title": "Natural language understanding with privacy-preserving bert", "venue": "Proceedings of the 30th ACM International Conference on Information & Knowledge Management,", "year": 2021}, {"authors": ["Alec Radford", "Jeff Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever"], "title": "Language models are unsupervised multitask learners", "year": 2019}, {"authors": ["Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", 
"Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J. Liu."], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "Journal of Machine Learning Research,", "year": 2020}, {"authors": ["Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang."], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "Proceedings of EMNLP 2016, pages 2383\u20132392. Association for Computational Linguistics.", "year": 2016}, {"authors": ["drea Santilli", "Thibault Fevry", "Jason Alan Fries", "Ryan Teehan", "Teven Le Scao", "Stella Biderman", "Leo Gao", "Thomas Wolf", "Alexander M Rush"], "title": "Multitask prompted training enables zero-shot task generalization", "venue": "In International Conference on Learning", "year": 2022}, {"authors": ["Weiyan Shi", "Ryan Shea", "Si Chen", "Chiyuan Zhang", "Ruoxi Jia", "Zhou Yu."], "title": "Just fine-tune twice: Selective differential privacy for large language models", "venue": "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages", "year": 2022}, {"authors": ["R. Shokri", "Marco Stronati", "Congzheng Song", "Vitaly Shmatikov."], "title": "Membership inference attacks against machine learning models", "venue": "2017 IEEE Symposium on Security and Privacy (SP), pages 3\u201318.", "year": 2016}, {"authors": ["Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D. 
Manning", "Andrew Ng", "Christopher Potts."], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "Proceedings of EMNLP 2013, pages 1631\u20131642,", "year": 2013}, {"authors": ["Congzheng Song", "Ananth Raghunathan."], "title": "Information leakage in embedding models", "venue": "Proceedings of ACM CCS 2020, page 377\u2013390.", "year": 2020}, {"authors": ["Toni Taipalus"], "title": "Vector database management systems: Fundamental concepts, use-cases, and current challenges", "year": 2023}, {"authors": ["Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel Bowman."], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing", "year": 2018}, {"authors": ["Jianguo Wang", "Xiaomeng Yi", "Rentong Guo", "Hai Jin", "Peng Xu", "Shengjun Li", "Xiangyu Wang", "Xiangzhou Guo", "Chengming Li", "Xiaohai Xu"], "title": "Milvus: A purpose-built vector data management system", "venue": "In Proceedings of the 2021 International Conference", "year": 2021}, {"authors": ["Yongqin Wang", "G Edward Suh", "Wenjie Xiong", "Benjamin Lefaudeux", "Brian Knott", "Murali Annavaram", "Hsien-Hsin S Lee."], "title": "Characterization of mpc-based private inference for transformer-based models", "venue": "2022 IEEE International Symposium on", "year": 2022}, {"authors": ["Jason Wei", "Xuezhi Wang", "Dale Schuurmans", "Maarten Bosma", "brian ichter", "Fei Xia", "Ed H. Chi", "Quoc V Le", "Denny Zhou"], "title": "2022. Chain of thought prompting elicits reasoning in large language models", "venue": "In Advances in Neural Information Processing Systems", "year": 2022}, {"authors": ["Adina Williams", "Nikita Nangia", "Samuel Bowman."], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "Proceedings of NAACL 2018, pages 1112\u20131122. 
Association for Computational Linguistics.", "year": 2018}, {"authors": ["Da Yu", "Saurabh Naik", "Arturs Backurs", "Sivakanth Gopi", "Huseyin A Inan", "Gautam Kamath", "Janardhan Kulkarni", "Yin Tat Lee", "Andre Manoel", "Lukas Wutschitz", "Sergey Yekhanin", "Huishuai Zhang."], "title": "Differentially private fine-tuning of language models", "venue": "In", "year": 2022}, {"authors": ["Da Yu", "Huishuai Zhang", "Wei Chen", "Jian Yin", "Tie-Yan Liu"], "title": "Large scale private learning via low-rank reparametrization", "venue": "In Proceedings of the 38th International Conference on Machine Learning,", "year": 2021}, {"authors": ["Xiang Yue", "Huseyin A Inan", "Xuechen Li", "Girish Kumar", "Julia McAnallen", "Huan Sun", "David Levitan", "Robert Sim."], "title": "Synthetic text generation with differential privacy: A simple and practical recipe", "venue": "Proceedings of ACL 2023.", "year": 2022}, {"authors": ["Denny Zhou", "Nathanael Sch\u00e4rli", "Le Hou", "Jason Wei", "Nathan Scales", "Xuezhi Wang", "Dale Schuurmans", "Claire Cui", "Olivier Bousquet", "Quoc V Le", "Ed H. Chi."], "title": "Least-to-most prompting enables complex reasoning in large language models", "venue": "The", "year": 2023}]
|
acl_23
|
GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators
| "[{\"affiliations\": [], \"name\": \"Yuchen Hu\"}, {\"affiliations\": [], \"name\": \"Chen Chen\"}, (...TRUNCATED)
|
SP:655dbbdc1661d224fff5e595e5a7ecf423061827
| "Recent advances in large language models (LLMs) have stepped forward the development of multilingua(...TRUNCATED)
| "1 introduction :Recent advances in large language models (LLMs) have attracted a surge of research (...TRUNCATED)
| "There is recently a surge of research interests in Transformer-based large language models, such as(...TRUNCATED)
| "In this section, we introduce the proposed method. First, we describe the latest foundational trans(...TRUNCATED)
| null | "In this paper, we propose a generative paradigm for translation tasks, namely GenTranslate, which l(...TRUNCATED)
| "LLMs. We select the popular LLaMA-2 (Touvron et al., 2023b) for our paradigm. Specifically, we empl(...TRUNCATED)
|
Therefore, future work may focus on how to better engage LLMs into the translation part
| null | "[{\"authors\": [\"Rohan Anil\", \"Andrew M Dai\", \"Orhan Firat\", \"Melvin Johnson\", \"Dmitry Lep(...TRUNCATED)
|
acl_23
|
Exploring Chain-of-Thought for Multi-modal Metaphor Detection
| "[{\"affiliations\": [], \"name\": \"Yanzhi Xu\"}, {\"affiliations\": [], \"name\": \"Yueying Hua\"}(...TRUNCATED)
|
SP:96198fcd7c5a5d8b3280fa7864a1ac0892482d13
| "Metaphors are commonly found in advertising and internet memes. However, the free form of internet (...TRUNCATED)
| "1 introduction :Metaphors are highly prevalent in our everyday expressions and writings, which can (...TRUNCATED)
| "Early metaphor detection tasks were confined to a single modality and employed methods based on rul(...TRUNCATED)
| "We propose a novel framework called C4MMD using MLLMs to enhance metaphor detection. We first intro(...TRUNCATED)
| null | "Our study aimed to tackle the challenges of multimodal metaphor interpretation by leveraging advanc(...TRUNCATED)
| "In this section, we begin by introducing the dataset used to validate our method, as well as the ex(...TRUNCATED)
| "This work not only advances multi-modal metaphor detection but also paves the way for future resear(...TRUNCATED)
| "We believe the main limitation of our work lies in only testing our metaphor detection ability with(...TRUNCATED)
| "[{\"authors\": [\"Khalid Alnajjar\", \"Mika H\\u00e4m\\u00e4l\\u00e4inen\", \"Shuo Zhang.\"], \"tit(...TRUNCATED)
|
acl_23
|
BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation
| "[{\"affiliations\": [], \"name\": \"Dayou Du\"}, {\"affiliations\": [], \"name\": \"Yijia Zhang\"},(...TRUNCATED)
|
SP:13d666c5a178a1336a6787584c5f570d397ef3a9
| "The upscaling of Large Language Models (LLMs) has yielded impressive advances in natural language p(...TRUNCATED)
| "1 introduction :Scaling up model sizes has been pivotal to the success of large language models (LL(...TRUNCATED)
| "PTQ and QAT PTQ is directly applied to pretrained models without additional training. PTQ for LLMs (...TRUNCATED)
| "In this section, we introduce BitDistiller, a QAT with self-distillation framework for LLMs, as ill(...TRUNCATED)
| null | "BitDistiller leverages QAT with self-distillation to boost sub-4-bit LLM performance. The asymmetri(...TRUNCATED)
| "We evaluate BitDistiller on the LLaMA-2 (Touvron et al., 2023) families and domain-specific LLMs wi(...TRUNCATED)
| "Limitations\nDespite the promising results demonstrated by BitDistiller, it is important to acknowl(...TRUNCATED)
| "Despite the promising results demonstrated by BitDistiller, it is important to acknowledge certain (...TRUNCATED)
| "[{\"authors\": [\"Rishabh Agarwal\", \"Nino Vieillard\", \"Piotr Stanczyk\", \"Sabela Ramos\", \"Ma(...TRUNCATED)
|
acl_23
|
A Unified Temporal Knowledge Graph Reasoning Model Towards Interpolation and Extrapolation
| "[{\"affiliations\": [], \"name\": \"Kai Chen\"}, {\"affiliations\": [], \"name\": \"Ye Wang\"}, {\"(...TRUNCATED)
|
SP:54d8b9e0de9f41beef2487127ba550ff6e4b275a
| "Temporal knowledge graph (TKG) reasoning has two settings: interpolation reasoning and extrapolatio(...TRUNCATED)
| "1 introduction :Knowledge graph (KG) is a semantic network that represents real-world facts in a st(...TRUNCATED)
| "Static KG reasoning methods can be summarized into three classes: the translation models (Bordes et(...TRUNCATED)
| "In this section, we introduce a novel temporal pathbased reasoning model with a neural-driven symbo(...TRUNCATED)
| null | "We propose a temporal path-based reasoning (TPAR) model with a neural-symbolic fashion that can be (...TRUNCATED)
| "Link prediction task that aims to infer incomplete time-wise fact with a missing entity ((s, r, ?, (...TRUNCATED)
| "To test the hypothesis that completing missing knowledge about the past can enhance the accuracy of(...TRUNCATED)
| "We identify that there may be some possible limitations in this study. First, our reasoning results(...TRUNCATED)
| "[{\"authors\": [\"John S. Baras\", \"George Theodorakopoulos.\"], \"title\": \"Path Problems in Net(...TRUNCATED)
|
acl_23
|
"Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Gener(...TRUNCATED)
| "[{\"affiliations\": [], \"name\": \"Shicheng Xu\"}, {\"affiliations\": [], \"name\": \"Liang Pang\"(...TRUNCATED)
|
SP:f538754179e8c3ef2a97459734a985f148f10524
| "Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating additio(...TRUNCATED)
| "1 introduction :Retrieval-augmented generation (RAG) is a popular framework in modern NLP systems t(...TRUNCATED)
| "Retrieval Augmented Generation Retrieval augmented generation (RAG) aims to provide addi-\ntional k(...TRUNCATED)
| null | null | "This paper proposes a novel perspective to reassess the role of LLMs in RAG that considers LLMs as (...TRUNCATED)
| "To demonstrate the generality of our unsupervised training method, we evaluate the performance of I(...TRUNCATED)
| null | null | "[{\"authors\": [\"Akari Asai\", \"Zeqiu Wu\", \"Yizhong Wang\", \"Avirup Sil\", \"Hannaneh Hajishir(...TRUNCATED)
|
acl_23
|
CSCD-NS: a Chinese Spelling Check Dataset for Native Speakers
| "[{\"affiliations\": [], \"name\": \"Yong Hu\"}, {\"affiliations\": [], \"name\": \"Fandong Meng\"},(...TRUNCATED)
|
SP:cd01bfaf6e7e593db2fb361479620de7a5e80a98
| "In this paper, we present CSCD-NS, the first Chinese spelling check (CSC) dataset designed for nati(...TRUNCATED)
| "1 introduction :Chinese spelling check (CSC) is a task to detect and correct spelling errors in Chi(...TRUNCATED)
| "CSC Datasets: The existing CSC datasets, such as the SIGHAN series (Wu et al., 2013; Yu et al., 201(...TRUNCATED)
| "License: CSCD-NS and the constructed pseudodata LCSTS-IME-2M are based on LCSTS (Hu et al., 2015), (...TRUNCATED)
| "The manual annotation of CSC dataset is very expensive, therefore, how to construct pseudo data has(...TRUNCATED)
| "In this paper, we focus on CSC for native speakers. For this scenario, we propose a new dataset, CS(...TRUNCATED)
| "In this section, we evaluate the performance of different models on CSCD-NS and compare different p(...TRUNCATED)
| "Consequently, enabling controlled text generation, addressing complex word-level and grammatical er(...TRUNCATED)
|
Limitation of the CSCD-NS dataset
| "[{\"authors\": [\"Baichuan.\"], \"title\": \"Baichuan 2: Open large-scale language models\", \"venu(...TRUNCATED)
|
acl_23
|
End of preview.