REROUTING LLM ROUTERS
A PREPRINT
Avital Shafran
The Hebrew University of Jerusalem
Roei Schuster
Wild Moose
Thomas Ristenpart
Cornell Tech
Vitaly Shmatikov
Cornell Tech
ABSTRACT
LLM routers aim to balance quality and cost of generation by classifying queries and routing them to a cheaper or more expensive LLM depending on ...
Figure 1: LLM routers classify queries and route complex ones to an expensive/strong model, others to a cheaper/weak
model. To control costs, LLM routers can be calibrated to maintain (for an expected workload) a specific ratio between
queries sent to the strong and weak models.
To initiate the study of this problem, we ...
In contrast to routers motivated by controlling costs, several LLM router designs focus solely on improving quality of
responses [31, 45, 57, 58].
The LLM routers described thus far do not modify the queries or individual LLM responses. Other types of control planes
do. Ensemble approaches such as mixture-of-experts (MoE) ...
where I(i_j) = 1 if i_j = s and I(i_j) = 0 if i_j = w. In other words, the predicate is that the fraction of queries routed to the strong model is bounded by ϵ.
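This predicate is easy to check mechanically. Below is a minimal sketch, assuming a routing trace is just the sequence of decisions i_1, ..., i_n over {s, w}; the function name and example values are illustrative, not from the paper.

```python
# Sketch: epsilon-bounded cost predicate over a routing trace.
# A trace is the sequence of routing decisions i_1, ..., i_n,
# each "s" (strong model) or "w" (weak model).

def cost_predicate_holds(trace: list[str], epsilon: float) -> bool:
    """True iff the fraction of queries routed to the strong model
    is at most epsilon (vacuously true for an empty trace)."""
    if not trace:
        return True
    strong_fraction = sum(1 for i in trace if i == "s") / len(trace)
    return strong_fraction <= epsilon

# Example: 2 of 10 queries went to the strong model; epsilon = 0.25 holds.
assert cost_predicate_holds(["s", "w", "w", "s", "w", "w", "w", "w", "w", "w"], 0.25)
```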
Control plane integrity. A control plane integrity adversary is a randomized algorithm A that seeks to maliciously guide inference flow.
In an unconstrained LLM control plane ...
Figure 2: Overview of our attack on LLM routing control plane integrity. The attack adds to each query a prefix (represented by the gear), called a “confounder gadget,” that causes the router to send the query to the strong model.
We focus on the binary router setting in which the router applies a learned scoring function ...
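Concretely, a binary router of this kind reduces to a score-and-threshold rule, with the threshold calibrated on an expected workload to hit the target strong/weak ratio (cf. Figure 1). A minimal sketch, with `score` standing in for the learned scoring function S_θ; the helper names and the quantile-based calibration are our illustration, not the paper's.

```python
from typing import Callable

def route(query: str, score: Callable[[str], float], tau: float) -> str:
    """Binary routing rule: strong model iff the learned score clears tau."""
    return "strong" if score(query) >= tau else "weak"

def calibrate_tau(workload_scores: list[float], epsilon: float) -> float:
    """Pick tau so that roughly a fraction epsilon of an expected workload
    scores at or above it, i.e., gets routed to the strong model."""
    ranked = sorted(workload_scores, reverse=True)
    k = max(int(epsilon * len(ranked)) - 1, 0)
    return ranked[k]
```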
Let B = {c̃_0, . . . , c̃_B}.
(3) Find the candidate that maximizes the score:

    c_i^(t+1) ← arg max_{c ∈ B} S_θ(c ∥ x_i) .    (1)

The final confounder c_i^(T) is used with query x_i. We early abort if, after 25 iterations, there is no update to the confounder gadget. Technically, we could abort early if we find a confounder whose score ...
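Putting the pieces together, the gadget search is a simple hill-climbing loop over Eq. (1). The sketch below is an illustration under stated assumptions: the random single-token-swap proposal, batch size, and helper names are ours (the excerpt does not fix the mutation scheme), and `score` stands in for S_θ.

```python
import random
from typing import Callable

def optimize_confounder(query: str,
                        score: Callable[[str], float],  # router scoring function S_theta
                        vocab: list[str],
                        gadget_len: int = 10,
                        num_candidates: int = 32,
                        max_iters: int = 100,
                        patience: int = 25) -> str:
    """Greedy search for a confounder gadget c maximizing S_theta(c || query).

    Each iteration proposes a batch of candidate gadgets (here: the current
    gadget with one random token replaced), keeps the best-scoring one, and
    aborts early after `patience` iterations without an update.
    """
    gadget = [random.choice(vocab) for _ in range(gadget_len)]
    best = score(" ".join(gadget) + " " + query)
    stale = 0
    for _ in range(max_iters):
        candidates = []
        for _ in range(num_candidates):
            cand = list(gadget)
            cand[random.randrange(gadget_len)] = random.choice(vocab)
            candidates.append(cand)
        # Eq. (1): keep the candidate with the highest routing score.
        scored = [(score(" ".join(c) + " " + query), c) for c in candidates]
        top_score, top = max(scored, key=lambda t: t[0])
        if top_score > best:
            best, gadget, stale = top_score, top, 0
        else:
            stale += 1
            if stale >= patience:
                break  # early abort: no update to the confounder gadget
    return " ".join(gadget)
```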
Routers                        Notation
Similarity-weighted ranking    RSW
Matrix factorization           RMF
BERT classifier                RCLS
LLM scoring                    RLLM

LLM pair   Strong (Ms)          Weak (Mw)
1          Llama-3.1-8B         4-bit Mixtral 8x7B
2          Llama-3.1-8B         Mistral-7B-Instruct-v0.3
3          Llama-3.1-8B         Llama-2-7B-chat-hf
4          GPT-4-1106-preview   4-bit Mixtral 8x7B

Benchmark   Description
MT-Bench    ...
will be evaluated with respect to this pair, which we refer to as LLM pair 1. We performed more limited experiments with
the original strong, weak model pair (LLM pair 4) and had similar success in rerouting.
We additionally performed experiments with two further weaker models, in order to better evaluate the case where ...
[Figure: routing score over optimization iterations for ten confounder-gadget attack runs (Attack #0–#9), one panel per router, beginning with (a) RSW; axes: Iterations (0–60) vs. Routing score.]
           RSW                    RMF                    RCLS                   RLLM
           Original  Confounded   Original  Confounded   Original  Confounded   Original  Confounded
MT-Bench   13.8      12.3 ± 0.2   12.6      12.3 ± 0.2   13.1      12.1 ± 0.2   12.7      12.7 ± 0.4
MMLU       20.4      20.1 ± 0.1   20.0      20.3 ± 0.1   20.2      20.5 ± 0.1   21.0      19.6 ± 0.1
GSM8K      17.1      15.1 ± 0.3   17.0      15.2 ± 0.3   17.0      15.0 ± 0.2   16.4      1...
             RSW                 RMF                 RCLS                RLLM
             Orig.  Conf.        Orig.  Conf.        Orig.  Conf.        Orig.  Conf.
LLM pair 2
MT-Bench     8.5    8.3 ± 0.0    8.4    8.3 ± 0.1    8.4    8.4 ± 0.1    8.4    8.3 ± 0.1
MMLU         55     64 ± 1       63     64 ± 0       58     66 ± 1       62     66 ± 0
GSM8K        46     64 ± 1       51     67 ± 1       49     63 ± 1       38     63 ± 2
LLM pair 3
MT-Bench     8.4    8.3 ± 0.0    8.1    8.3 ± 0.1    8.3    8.4 ± 0.1    8.1    8.2 ± 0.1
MML...
Surrogate   ˆRSW                  ˆRMF                  ˆRCLS                 ˆRLLM
Target      RMF   RCLS   RLLM     RSW   RCLS   RLLM     RSW   RMF   RLLM      RSW   RMF   RCLS
MT-Bench    0.4   0.8    0.6      1.4   0.7    0.3      1.7   0.3   0.7       0.8   −0.6  0.0
MMLU        0.1   0.8    1.1      0.2   0.2    1.1      0.3   0.8   0.9       1.3   1.2   0.9
GSM8K       1.9   1.7    0.6      1.6   1.7    0.2      1.7   1.0   0.4       1.3   1.3   1.7
Table 6: Differences between average pe...
           RSW   RMF   RCLS   RLLM
MT-Bench   100   100   100    100
MMLU       100   96    100    100
GSM8K      100   100   100    100
Table 8: Upgrade rates for query-specific gadgets, in the white-box setting. Results are nearly perfect, i.e., nearly all confounded queries are routed to the strong model.

Surrogate   ˆRSW                ˆRMF                ˆRCLS               ˆRLLM
Target      RMF   RCLS  RLLM    RSW   RCLS  R...
           RSW                    RMF                    RCLS                   RLLM
           Original  Confounded   Original  Confounded   Original  Confounded   Original  Confounded
MT-Bench   9.2       9.2 ± 0.0    9.1       9.3 ± 0.0    9.2       9.1 ± 0.0    8.9       9.1 ± 0.1
MMLU       76        84 ± 1       76        81 ± 0       76        84 ± 0       78        84 ± 1
GSM8K      62        86 ± 0       65        88 ± 1       68        90 ± 2       66        85 ± 2
Table 10: Benchmark-specific average scores of responses to th...
[Figure: histograms of perplexity values of original vs. confounded queries, one panel per router: (a) RSW, (b) RMF, (c) RCLS, (d) RLLM; axes: Perplexity vs. Count.]
[Figure: histograms of perplexity values (range 20–50) of original vs. confounded queries, one panel per router: (a) RSW, (b) RMF, (c) RCLS, (d) RLLM; axes: Perplexity vs. Count.]
an extra, potentially expensive LLM invocation for each query processed by the router. Second, it may degrade the quality of responses from the destination LLMs, which are sensitive to the phrasing of queries and prompts.
Detecting anomalous user workloads. Another possible defense requires the router to monitor individual ...
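One concrete instantiation of query-level filtering is a perplexity threshold computed with a small reference language model, in the spirit of the original-vs-confounded perplexity histograms shown above. A minimal sketch, assuming GPT-2 via Hugging Face transformers; the threshold value is an illustrative placeholder that would need calibration on benign traffic.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: exp of the mean token NLL."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return float(torch.exp(loss))

def flag_query(query: str, threshold: float = 100.0) -> bool:
    """Flag a query as suspicious if its perplexity exceeds the threshold."""
    return perplexity(query) > threshold
```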
We introduced and defined a new safety property, LLM control plane integrity. Informally, this property holds if an adversarial user cannot influence routing decisions made by the control plane. To show that existing LLM routers do not satisfy this property, we designed, implemented, and evaluated a black-box optimization ...
References
[1] “Chatbot Arena LLM Leaderboard: Community-driven evaluation for best LLM and AI chatbots,” https://
huggingface.co/spaces/lmarena-ai/chatbot-arena-leaderboard, accessed: 2024-11-14.
[2] “Hello GPT-4o,” https://openai.com/index/hello-gpt-4o/, published: 2024-05-23.
[3] “Introducing Llama 3.1: Our most capable models to date,” ...
[26] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for
language understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association
for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019.
[48] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against
machine learning,” in Proceedings of the 2017 ACM on Asia conference on computer and communications security,
2017.
[49] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” in IEEE European Symposium on Security and Privacy (EuroS&P), 2016.
[71] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. Xing et al., “Judging LLM-as-a-judge with MT-Bench and Chatbot Arena,” Advances in Neural Information Processing Systems (NeurIPS), 2023.
[72] S. Zhu, R. Zhang, B. An, G. Wu, J. Barrow, Z. Wang, F. Huang, A. Nenkova, and T. Sun, “AutoDAN: Interpretable gradient-based adversarial attacks on large language models,” ...
                  RSW      RMF      RCLS     RLLM
MT-Bench  Prefix  100 ± 0  100 ± 0  100 ± 0  73 ± 5
          Suffix  100 ± 0  100 ± 0  100 ± 0  84 ± 4
MMLU      Prefix  90 ± 1   78 ± 4   100 ± 0  95 ± 1
          Suffix  82 ± 2   63 ± 3   93 ± 1   93 ± 1
GSM8K     Prefix  98 ± 0   100 ± 0  100 ± 0  100 ± 0
          Suffix  94 ± 1   100 ± 0  100 ± 0  94 ± 3
Table 12: Average upgrade rates for different ways of adding...
Gadget            RSW     RMF     RCLS     RLLM
MT-Bench  Init    7       3       8        3
          Random  97 ± 2  37 ± 8  62 ± 10  38 ± 4
MMLU      Init    21      4       0        13
          Random  49 ± 5  6 ± 3   14 ± 7   68 ± 5
GSM8K     Init    21      20      0        9
          Random  58 ± 8  34 ± 8  37 ± 9   41 ± 7
Table 14: Average upgrade rates when the gadget is not optimized and is either defined to be the initial set of tokens or a...
[Figure 7 panels: (a) MT-bench, ROC AUC = 0.38; (b) MMLU, ROC AUC = 0.47; (c) GSM8K, ROC AUC = 0.38; series: strong vs. weak; axes: Perplexity vs. Count.]
Figure 7: Histograms of the perplexity values of c...
A Primer in BERTology: What We Know About How BERT Works
Anna Rogers
Center for Social Data Science
University of Copenhagen
arogers@sodas.ku.dk
Olga Kovaleva
Dept. of Computer Science
University of Massachusetts Lowell
okovalev@cs.uml.edu
Anna Rumshisky
Dept. of Computer Science
University of Massachusetts Lowell
arum@cs.uml.edu
3 What knowledge does BERT have?
A number of studies have looked at the knowledge
encoded in BERT weights. The popular approaches
include fill-in-the-gap probes of MLM, analysis of
self-attention weights, and probing classifiers with
different BERT representations as inputs.
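As a concrete example of the first approach, a fill-in-the-gap probe just queries the masked-LM head with a cloze template and inspects the top completions. A minimal sketch using the Hugging Face fill-mask pipeline; the model choice and template are ours, not from any particular study.

```python
from transformers import pipeline

# Fill-in-the-gap probing: ask BERT's MLM head to complete a template
# and inspect what knowledge the top-ranked completions reveal.
fill = pipeline("fill-mask", model="bert-base-uncased")

for pred in fill("The capital of France is [MASK]."):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```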
3.1 Syntactic knowledge
Lin et al. (2019) show ...
report that an intermediate fine-tuning step with
supervised parsing does not make much difference
for downstream task performance.
3.2 Semantic knowledge
To date, more studies have been devoted to BERT’s
knowledge of syntactic rather than semantic phe-
nomena. However, we do have evidence from an
MLM probing study that ...
[Figure 3: Attention patterns in BERT (Kovaleva et al., 2019): diagonal, heterogeneous, vertical, vertical + diagonal, and block patterns, illustrated over [CLS] and [SEP] token positions.]
ies) insufficient (Warstadt et al., 2019). A given
method might also favor one model over another,
e.g., RoBERTa ...
avenue for future work.
The above discussion concerns token embed-
dings, but BERT is typically used as a sentence or
text encoder. The standard way to generate sen-
tence or text representations for classification is
to use the [CLS] token, but alternatives are also
being discussed, including concatenation of token representations ...
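For reference, both options read off the encoder's final hidden states; here is a minimal sketch with Hugging Face transformers contrasting the [CLS] vector with mean pooling over token representations (the model choice and pooling details are illustrative assumptions).

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def sentence_vectors(text: str):
    enc = tokenizer(text, return_tensors="pt")
    hidden = model(**enc).last_hidden_state          # (1, seq_len, 768)
    cls_vec = hidden[:, 0]                           # the [CLS] token
    mask = enc.attention_mask.unsqueeze(-1)          # ignore padding
    mean_vec = (hidden * mask).sum(1) / mask.sum(1)  # mean over real tokens
    return cls_vec, mean_vec

cls_vec, mean_vec = sentence_vectors("BERT is typically used as a text encoder.")
print(cls_vec.shape, mean_vec.shape)  # torch.Size([1, 768]) twice
```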
More recently, Kobayashi et al. (2020) showed
that the norms of attention-weighted input vec-
tors, which yield a more intuitive interpretation
of self-attention, reduce the attention to special to-
kens. However, even when the attention weights
are normed, it is still not the case that most heads that do the "heavy lifting" ...
layers are more transferable (Liu et al., 2019a). In
fine-tuning, it explains why the final layers change
the most (Kovaleva et al., 2019), and why restoring
the weights of lower layers of fine-tuned BERT to
their original values does not dramatically hurt the
model performance (Hao et al., 2019).
Tenney et al. (2019a) su...
5.3 Pre-training BERT
The original BERT is a bidirectional Transformer
pre-trained on two tasks: next sentence prediction
(NSP) and masked language model (MLM) (sec-
tion 2). Multiple studies have come up with alter-
native training objectives to improve on BERT,
which could be categorized as follows:
• How to mask. Ra...
Figure 5: Pre-trained weights help BERT find wider
optima in fine-tuning on MRPC (right) than training
from scratch (left) (Hao et al., 2019)
beddings as input for training BERT, while Poerner et al. (2019) adapt entity vectors to BERT representations. As mentioned above, Wang et al. (2020c) integrate knowledge not thr...
be successfully approximated with adapter mod-
ules. They achieve competitive performance on
26 classification tasks at a fraction of the computa-
tional cost. Adapters in BERT were also used for
multi-task learning (Stickland and Murray, 2019)
and cross-lingual transfer (Artetxe et al., 2019). An alternative to fine-tuning ...
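To make the adapter idea from the start of this passage concrete: an adapter is a small bottleneck MLP with a residual connection, inserted inside each Transformer layer while the pre-trained weights stay frozen. A minimal sketch; the dimensions are illustrative, not those of any particular paper.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, plus a
    residual connection. Only these few parameters are trained; the
    surrounding pre-trained Transformer weights stay frozen."""

    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))

# Usage: applied to a sublayer's output of shape (batch, seq_len, hidden).
x = torch.randn(2, 16, 768)
print(Adapter()(x).shape)  # torch.Size([2, 16, 768])
```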
                                  Compression  Performance  Speedup  Model    Evaluation
BERT-base (Devlin et al., 2019)   ×1           100%         ×1       BERT-12  All GLUE tasks, SQuAD
BERT-small                        ×3.8         91%          -        BERT-4†  All GLUE tasks
Distillation
DistilBERT (Sanh et al., 2019a)   ×1.5         90%§         ×1.6     BERT-6   All GLUE tasks, SQuAD
BERT6-PKD (Sun et al., 2019a)     ×1.6         98%          ×1.9     BERT-6   No WNLI, CoL...
then check which of them survive the pruning, find-
ing that the syntactic and positional heads are the
last ones to go. For BERT, Prasanna et al. (2020)
go in the opposite direction: pruning on the basis of
importance scores, and interpreting the remaining
"good" subnetwork. With respect to self-attention
heads specific...
References
Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Edward Guo. 2019. Knowledge Distillation from Internal Representations. arXiv preprint arXiv:1910.03723.
Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled Contextualized Embeddings for Named Entity Recognition. In Proceedings ...
Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, ...
Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping. arXiv:2002.06305 [cs].
Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2020. When Bert Forgets How To POS: A...
Kong, China. Association for Computational Linguistics.
John Hewitt and Christopher D. Manning. 2019. A Structural Probe for Finding Syntax in Word Representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1...
International Conference on Learning Representations.
Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the Dark Secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Proce...