Dataset schema (column: type, min–max length):
id: string, 12–15
title: string, 8–162
content: string, 1–17.6k
prechunk_id: string, 0–15
postchunk_id: string, 0–15
arxiv_id: string, 10–10
references: list, 1–1
2401.04088#0
Mixtral of Experts
arXiv:2401.04088v1 [cs.LG] 8 Jan 2024 # Mixtral of Experts Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lél...
2401.04088#1
2401.04088
[ "1905.07830" ]
2401.04088#1
Mixtral of Experts
Code: https://github.com/mistralai/mistral-src Webpage: https://mistral.ai/news/mixtral-of-experts/ # Introduction In this paper, we present Mixtral 8x7B, a sparse mixture of experts model (SMoE) with open weights, licensed under Apache 2.0. Mixtral outperforms Llama 2 70B and GPT-3.5 on most benchmarks. As it only use...
2401.04088#0
2401.04088#2
2401.04088
[ "1905.07830" ]
2401.04088#2
Mixtral of Experts
experts") to process the token and combine their output additively. This technique increases the number of parameters of a model while controlling cost and latency, as the model only uses a fraction of the total set of parameters per token. Mixtral is pretrained with multilingual data using a context size of 32k token...
2401.04088#1
2401.04088#3
2401.04088
[ "1905.07830" ]
2401.04088#3
Mixtral of Experts
Instruct, a chat model fine-tuned to follow instructions using supervised fine-tuning and Direct Preference Optimization [25]. Its performance notably surpasses that of GPT-3.5 Turbo, Claude-2.1, Gemini Pro, and Llama 2 70B – chat model on human evaluation benchmarks. Mixtral – Instruct also demonstrates reduced biases...
2401.04088#2
2401.04088#4
2401.04088
[ "1905.07830" ]
2401.04088#4
Mixtral of Experts
# 2 Architectural details Mixtral is based on a transformer architecture [31] and uses the same modifications as described in [18], with the notable exceptions that Mixtral supports a fully dense context length of 32k tokens, and the feed-forward blocks are replaced by Mixture-of-Expert layers (Section 2.1). The mod...
2401.04088#3
2401.04088#5
2401.04088
[ "1905.07830" ]
2401.04088#5
Mixtral of Experts
$\sum_{i=0}^{n-1} G(x)_i \cdot E_i(x)$. Here, $G(x)_i$ denotes the $n$-dimensional output of the gating network for the i-th expert, and $E_i(x)$ is the output of the i-th expert network. If the gating vector is sparse, we can avoid computing the outputs of experts whose gates are zero. There are multiple alternative ways of implementing G...
2401.04088#4
2401.04088#6
2401.04088
[ "1905.07830" ]
2401.04088#6
Mixtral of Experts
# 1https://mistral.ai/news/mixtral-of-experts/ [Table residue, model hyperparameters: dim 4096, n_layers 32, head_dim 128, hidden_dim 14336, n_heads 32, n_kv_heads 8, context_len 32768, vocab_size 32000, num_experts 8, top_k_experts 2.] can increase the model's parameter count while keeping its computational cost effectively constant. This motivates a distinction between the model's total parameter count (commonly referenced as the sparse parameter count), which ...
2401.04088#5
2401.04088#7
2401.04088
[ "1905.07830" ]
2401.04088#7
Mixtral of Experts
We compare Mixtral to Llama, and re-run all benchmarks with our own evaluation pipeline for fair comparison. We measure performance on a wide variety of tasks categorized as follows: • Commonsense Reasoning (0-shot): Hellaswag [32], Winogrande [26], PIQA [3], SIQA [27], OpenbookQA [22], ARC-Easy, ARC-Challenge [8], Co...
2401.04088#6
2401.04088#8
2401.04088
[ "1905.07830" ]
2401.04088#8
Mixtral of Experts
Active Params | MMLU | HellaS | WinoG | PIQA | Arc-e | Arc-c | NQ | TriQA | HumanE | MBPP | Math | GSM8K
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
7B | 44.4% | 77.1% | 69.5% | 77.9% | 68.7% | 43.2% | 17.5% | 56.6% | 11.6% | 26.1% | 3.9% | 16.0%
13B | 55.6% | 80.7% | 72.9% | 80.8% | 75.2% | 48.8% | 16.7% | 64.0% | 18.9% | 35.4% | 6.0% | 34.3%
33B | 56.8% | 83.7% | 76.2% | 82.2% | 79.6% | 54.4% | 24.1% | 68.5% | 25.0% | 40.9% | 8.4% | 44.1%
70B | 69.9% | 85....
2401.04088#7
2401.04088#9
2401.04088
[ "1905.07830" ]
2401.04088#9
Mixtral of Experts
[Figure residue: panels of benchmark accuracy versus active parameters, comparing Mistral 7B and Mixtral 8x7B against LLaMA 2 models at 7B, 13B, 34B, and 70B; x-axis: Active Params.] Figure ...
2401.04088#8
2401.04088#10
2401.04088
[ "1905.07830" ]
2401.04088#10
Mixtral of Experts
The memory costs for serving Mixtral are proportional to its sparse parameter count, 47B, which is still smaller than Llama 2 70B. As for device utilization, we note that the SMoE layer introduces additional overhead due to the routing mechanism and due to the increased memory loads when running more than one expert p...
2401.04088#9
2401.04088#11
2401.04088
[ "1905.07830" ]
2401.04088#11
Mixtral of Experts
 | LLaMA 2 70B | GPT-3.5 | Mixtral 8x7B
--- | --- | --- | ---
MMLU (MCQ in 57 subjects) | 69.9% | 70.0% | 70.6%
HellaSwag (10-shot) | 87.1% | 85.5% | 86.7%
ARC Challenge (25-shot) | 85.1% | 85.2% | 85.8%
WinoGrande (5-shot) | 83.2% | 81.6% | 81.2%
MBPP (pass@1) | 49.8% | 52.2% | 60.7%
GSM-8K (5-shot) | 53.6% | 57.1% | 58.4%
MT Bench (for Instruct Models) | 6.86 | 8.32 | 8.30
Table 3: ...
2401.04088#10
2401.04088#12
2401.04088
[ "1905.07830" ]
2401.04088#12
Mixtral of Experts
Active Params French Arc-c HellaS MMLU German Arc-c HellaS MMLU Spanish Arc-c HellaS MMLU Italian Arc-c HellaS MMLU 33B 70B 13B 42.9% 65.4% 49.0% 39.3% 68.1% 49.9% 49.9% 72.5% 64.3% 49.4% 70.9% 65.1% 58.2% 77.4% 70.9% 54.3% 73.0% 71.5% 55.4% 77.6% 72.5% 52.8% 75.1% 70.9% 41.1% 63.3% 48.7% 47.3% 68.7% 64.2% 45.7% 69.8% ...
2401.04088#11
2401.04088#13
2401.04088
[ "1905.07830" ]
2401.04088#13
Mixtral of Experts
# 3.2 Long range performance To assess the capabilities of Mixtral to tackle long context, we evaluate it on the passkey retrieval task introduced in [23], a synthetic task designed to measure the ability of the model to retrieve a passkey inserted randomly in a long prompt. Results in Figure 4 (Left) show that Mixtral...
2401.04088#12
2401.04088#14
2401.04088
[ "1905.07830" ]
2401.04088#14
Mixtral of Experts
BBQ is a dataset of hand-written question sets that target attested social biases against nine different socially-relevant categories: age, disability status, gender identity, nationality, physical appearance, race/ethnicity, religion, socio-economic status, sexual orientation. BOLD is a large-scale dataset that co...
2401.04088#13
2401.04088#15
2401.04088
[ "1905.07830" ]
2401.04088#15
Mixtral of Experts
# Instruction Fine-tuning We train Mixtral – Instruct using supervised fine-tuning (SFT) on an instruction dataset followed by Direct Preference Optimization (DPO) [25] on a paired feedback dataset. Mixtral – Instruct reaches a score of 8.30 on MT-Bench [33] (see Table 2), making it the best open-weights model as of De...
2401.04088#14
2401.04088#16
2401.04088
[ "1905.07830" ]
2401.04088#16
Mixtral of Experts
Figure 6: LMSys Leaderboard. (Screenshot from Dec 22, 2023) Mixtral 8x7B Instruct v0.1 achieves an Arena Elo rating of 1121 outperforming Claude-2.1 (1117), all versions of GPT-3.5-Turbo (1117 best), Gemini Pro (1111), and Llama-2-70b-chat (1077). Mixtral is currently the best open-weights model by a large margin. 3htt...
2401.04088#15
2401.04088#17
2401.04088
[ "1905.07830" ]
2401.04088#17
Mixtral of Experts
in English often get routed through the same expert even though they involve multiple tokens. Similarly, in code, the indentation tokens are always assigned to the same experts, particularly at the first and last layers where the hidden states are more correlated to the input and output of the model. We also note from ...
2401.04088#16
2401.04088#18
2401.04088
[ "1905.07830" ]
2401.04088#18
Mixtral of Experts
Layer 0 First choice Layer 15 Layer 31 Layer 0 First or second choice Layer 15 Layer 31 ArXiv DM Mathematics Github Gutenberg PhilPapers PubMed Abstracts StackExchange Wikipedia (en) 14.0% 14.1% 14.9% 13.9% 13.6% 14.2% 13.6% 14.4% 27.9% 28.4% 28.1% 26.1% 25.3% 24.6% 27.2% 23.6% 22.7% 19.7% 19.7% 26.3% 22.1% 22.0% 23.6%...
2401.04088#17
2401.04088#19
2401.04088
[ "1905.07830" ]
2401.04088#19
Mixtral of Experts
Repetitions at the first layer are close to random, but are significantly higher at layers 15 and 31. The high number of repetitions shows that expert choice exhibits high temporal locality at these layers: the proportion of repeated consecutive assignments is significantly higher than random for higher layers. This has implications in how one m...
2401.04088#18
2401.04088#20
2401.04088
[ "1905.07830" ]
2401.04088#20
Mixtral of Experts
__init__(self, experts: List[nn.Module], gate: nn.Module, moe_args: MoeArgs): super().__init__() assert len(experts) > 0 self.experts = nn.ModuleList(experts) self.gate = gate self.args = moe_args def forward(self, inputs: torch.Tensor): inputs_squashed = inputs.view(-1, inputs.shape[-1]) gate_logits = self.gate(inputs_squashed) weights, selected_experts = to...
2401.04088#19
2401.04088#21
2401.04088
[ "1905.07830" ]
2401.04088#21
Mixtral of Experts
Tensor): inputs_squashed = inputs.view(-1, inputs.shape[-1]) gate_logits = self.gate(inputs_squashed) weights, selected_experts = torch.topk(gate_logits, self.args.num_experts_per_tok) weights = nn.functional.softmax(weights, dim=1, dtype=torch.float).type_as(inputs) results = torch.zeros_like(inputs_squashed) for i, expert in enumerate(self.experts): ...
2401.04088#20
2401.04088#22
2401.04088
[ "1905.07830" ]
2401.04088#22
Mixtral of Experts
lying in still air is (A) the same (B) greater (C) less (D) either greater or less depending on wind speed. A model airplane flies slower when flying into the wind and faster with wind at its back. When launched at right angles to the wind, a cross wind, its ground speed compared with flying in still air is (A) the same (B) g...
2401.04088#21
2401.04088#23
2401.04088
[ "1905.07830" ]
2401.04088#23
Mixtral of Experts
# Acknowledgements We thank the CoreWeave and Scaleway teams for technical support as we trained our models. We are grateful to NVIDIA for supporting us in integrating TensorRT-LLM and Triton and working alongside us to make a sparse mixture of experts compatible with TensorRT-LLM. # References [1] Jacob Austin, Augu...
2401.04088#22
2401.04088#24
2401.04088
[ "1905.07830" ]
2401.04088#24
Mixtral of Experts
Llemma: An open language model for mathematics. arXiv preprint arXiv:2310.10631, 2023. [3] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, pages 7432–7439, 2020. [4] Mark Ch...
2401.04088#23
2401.04088#25
2401.04088
[ "1905.07830" ]
2401.04088#25
Mixtral of Experts
Quac: Question answering in context. arXiv preprint arXiv:1808.07036, 2018. [6] Aidan Clark, Diego De Las Casas, Aurelia Guy, Arthur Mensch, Michela Paganini, Jordan Hoffmann, Bogdan Damoc, Blake Hechtman, Trevor Cai, Sebastian Borgeaud, et al. Unified scaling laws for routed language models. In International Conferenc...
2401.04088#24
2401.04088#26
2401.04088
[ "1905.07830" ]
2401.04088#26
Mixtral of Experts
Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019. [8] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint...
2401.04088#25
2401.04088#27
2401.04088
[ "1905.07830" ]
2401.04088#27
Mixtral of Experts
Bold: Dataset and metrics for measuring biases in open-ended language generation. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 862–872, 2021. [11] Artyom Eliseev and Denis Mazur. Fast inference of mixture-of-experts language models with offloading. arXiv preprint arXiv...
2401.04088#26
2401.04088#28
2401.04088
[ "1905.07830" ]
2401.04088#28
Mixtral of Experts
[14] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020. [15] Hussein Hazimeh, Zhe Zhao, Aakanksha Chowdhery, Maheswaran Sa...
2401.04088#27
2401.04088#29
2401.04088
[ "1905.07830" ]
2401.04088#29
Mixtral of Experts
Dselect-k: Differentiable selection in the mixture of experts with applications to multi-task learning. Advances in Neural Information Processing Systems, 34:29335–29347, 2021. [16] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask lang...
2401.04088#28
2401.04088#30
2401.04088
[ "1905.07830" ]
2401.04088#30
Mixtral of Experts
Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017. [20] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural question...
2401.04088#29
2401.04088#31
2401.04088
[ "1905.07830" ]
2401.04088#31
Mixtral of Experts
Landmark attention: Random-access infinite context length for transformers. arXiv preprint arXiv:2305.16300, 2023. [24] Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R Bowman. Bbq: A hand-built bias benchmark for question answering. arXiv prepri...
2401.04088#30
2401.04088#32
2401.04088
[ "1905.07830" ]
2401.04088#32
Mixtral of Experts
[26] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, pages 99–106, 2021. [27] Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. Socialiqa: Commonsense reasoning about social interacti...
2401.04088#31
2401.04088#33
2401.04088
[ "1905.07830" ]
2401.04088#33
Mixtral of Experts
[28] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017. [29] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Wo...
2401.04088#32
2401.04088#34
2401.04088
[ "1905.07830" ]
2401.04088#34
Mixtral of Experts
Attention is all you need. Advances in neural information processing systems, 30, 2017. [32] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019. [33] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang,...
2401.04088#33
2401.04088#35
2401.04088
[ "1905.07830" ]
2401.04088#35
Mixtral of Experts
[34] Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364, 2023. [35] Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew M Dai, ...
2401.04088#34
2401.04088#36
2401.04088
[ "1905.07830" ]
2401.04088#36
Mixtral of Experts
[Figure legend residue: ArXiv, DM Mathematics, Github, Gutenberg, PhilPapers, PubMed Abstracts, StackExchange, Wikipedia (en).] Figure 9: Proportion of tokens assigned to each expert on different subsets from The Pile dataset, separated by whether the expert was selected as first or second choice, or either. The "Either choice" ...
2401.04088#35
2401.04088#37
2401.04088
[ "1905.07830" ]
2401.04088#37
Mixtral of Experts
[Figure axis residue: Wikipedia (en), PubMed Abstracts.] Figure 10: Repeated consecutive assignments per MoE layer. Repeated assignments occur a lot more often than they would with uniform assignments (materialized by the dashed lines). Patterns are similar across datasets with fewer repetitions for DM Mathematics.
2401.04088#36
2401.04088
[ "1905.07830" ]
2312.17238#0
Fast Inference of Mixture-of-Experts Language Models with Offloading
arXiv:2312.17238v1 [cs.LG] 28 Dec 2023 # Fast Inference of Mixture-of-Experts Language Models with Offloading Artyom Eliseev Moscow Institute of Physics and Technology Yandex School of Data Analysis lavawolfiee@gmail.com # Denis Mazur Moscow Institute of Physics and Technology Yandex Resear...
2312.17238#1
2312.17238
[ "2302.13971" ]
2312.17238#1
Fast Inference of Mixture-of-Experts Language Models with Offloading
With the widespread adoption of Large Language Models (LLMs), many deep learning practitioners are looking for strategies of running these models more efficiently. One such strategy is to use sparse Mixture-of-Experts (MoE) – a type of model architecture where only a fraction of model layers are active for any given i...
2312.17238#0
2312.17238#2
2312.17238
[ "2302.13971" ]
2312.17238#2
Fast Inference of Mixture-of-Experts Language Models with Offloading
To use these LLMs on more affordable hardware setups, one must either compress model parameters (Dettmers et al., 2022; Frantar et al., 2022) or offload parameters to a cheaper storage, be it RAM or SSD (Pudipeddi et al., 2020; Sheng et al., 2023). Several recent works modify transformer architecture by introducing spa...
2312.17238#1
2312.17238#3
2312.17238
[ "2302.13971" ]
2312.17238#3
Fast Inference of Mixture-of-Experts Language Models with Offloading
for any single forward pass, allowing for more compute-efficient training Fedus et al. (2021); Du et al. (2022). Notably, MoEs are among the largest Fedus et al. (2021) and among the best Mixtral AI team (2023) of available LLMs. While Mixture-of-Experts models can be more efficient than their dense counterparts, many ...
2312.17238#2
2312.17238#4
2312.17238
[ "2302.13971" ]
2312.17238#4
Fast Inference of Mixture-of-Experts Language Models with Offloading
• we consider the specific scenario of running Mixtral-8x7B-Instruct on a T4, RTX 3060 and RTX 3080 Mobile and develop a practical combination of mixed quantization and the proposed offloading algorithm to run this model interactively at 2-3 tokens per second depending on the hardware. The source code with our implem...
2312.17238#3
2312.17238#5
2312.17238
[ "2302.13971" ]
2312.17238#5
Fast Inference of Mixture-of-Experts Language Models with Offloading
Most of these models follow traditional (dense) Transformer architecture for embeddings and attention layers, and only use Mixture for the feedforward (MLP) blocks and use a linear token-level gating function. A common observation across most of these works is that MoE models are cheaper to train and inference Fedus et...
2312.17238#4
2312.17238#6
2312.17238
[ "2302.13971" ]
2312.17238#6
Fast Inference of Mixture-of-Experts Language Models with Offloading
2 factorization Hsu et al. (2022), or a combination thereof. These compression types are not specific to LLMs and are based on much older methods outside the scope of our work4. However, recent works found that there are unique challenges to quantizing very large transformer-based language models due to emergent outlie...
2312.17238#5
2312.17238#7
2312.17238
[ "2302.13971" ]
2312.17238#7
Fast Inference of Mixture-of-Experts Language Models with Offloading
s parameters to be loaded. # 2.4 Hardware Setup While our analysis is not specific to any hardware setup, we target the hardware specifications of cheap / free-tier cloud instances Google (2023) and the upper half of gaming computers Steam (2023): i) enough system memory to hold model parameters, ii) a GPU with 11-16GB...
2312.17238#6
2312.17238#8
2312.17238
[ "2302.13971" ]
2312.17238#8
Fast Inference of Mixture-of-Experts Language Models with Offloading
# 3 Method In this work, we aim to systematically find the optimal way to inference modern Mixture-of-Experts LLMs on desktop or low-end cloud instances. More specifically, we focus on the task of generating tokens interactively, i.e. generate multiple tokens per second at batch size 1. The generative inference worklo...
2312.17238#7
2312.17238#9
2312.17238
[ "2302.13971" ]
2312.17238#9
Fast Inference of Mixture-of-Experts Language Models with Offloading
[Figure residue: selected experts for Mixtral-8x7B-Instruct at layers 0 (top) and 15 (bottom); axes: Layer 0 expert # / Layer 15 expert # versus prompt tokens.] Figure 1: An example of expert loading pattern in Mixtral-8x7B-Instruct for ...
2312.17238#8
2312.17238#10
2312.17238
[ "2302.13971" ]
2312.17238#10
Fast Inference of Mixture-of-Experts Language Models with Offloading
The reason behind this is that, unlike with dense models, MoE offloading cannot effectively overlap expert loading with computation. To understand this problem, let us zoom into the process of generating a single token, layer-by-layer. The full compute workload starts by embedding the previous token via look-up, then a...
2312.17238#9
2312.17238#11
2312.17238
[ "2302.13971" ]
2312.17238#11
Fast Inference of Mixture-of-Experts Language Models with Offloading
While analyzing modern MoE models, we found that it is possible to get an accurate guess of next layer's experts by applying next layer's gating function to previous layer's hidden states – or, more specifically, to the same hidden states that are used by previous MoE layer's gating function. This heuristic relie...
2312.17238#10
2312.17238#12
2312.17238
[ "2302.13971" ]
2312.17238#12
Fast Inference of Mixture-of-Experts Language Models with Offloading
We trigger speculative expert loading immediately after the system finished loading all experts for the current layer. The speculative expert loading fetches 1–2 most likely experts. The newly loaded experts do not replace the currently cached experts. If a speculatively loaded expert was later used during next layer...
2312.17238#11
2312.17238#13
2312.17238
[ "2302.13971" ]
2312.17238#13
Fast Inference of Mixture-of-Experts Language Models with Offloading
5 iy & cache_size =3 cache_size = 2 cache_size =4 0.84 | PIO â prefetch 1 experts ~ escent ae | PRS aa 0.2} â â prefetch 2 experts â â prefetch 3 experts 0.0 00 0 5 10 15 20 25 30 0 5 10 15 20 25 30 Layer # Layer # S Fd Ed Cache hit rate Bd ES Prediction recall = ES Ss & Figure 2: (left) LRU cache hit ratio for differe...
2312.17238#12
2312.17238#14
2312.17238
[ "2302.13971" ]
2312.17238#14
Fast Inference of Mixture-of-Experts Language Models with Offloading
We measure WikiText2 perplexity Merity et al. (2016), C4 perplexity Raffel et al. (2020), as well as 5-shot MMLU accuracy Hendrycks et al. (2021). Our objective for this section is to find the best trade-off between size and performance for offloading with the target setups. Note that out of 46.7B total parameters in ...
2312.17238#13
2312.17238#15
2312.17238
[ "2302.13971" ]
2312.17238#15
Fast Inference of Mixture-of-Experts Language Models with Offloading
Green values correspond to the configurations we chose for full system evaluation. 6 Algorithm 2-bit Experts 3-bit Experts A100 3080 Mobile 3060 T4 (Colab) A100 3080 Mobile 3060 T4 (Cloud) 3.061 Full algorithm 2.918 W/o expert pre-loading 2.265 W/o LRU cache & pre-loading Naive offloading (accelerate) 1.392 2.655 2.227...
2312.17238#14
2312.17238#16
2312.17238
[ "2302.13971" ]
2312.17238#16
Fast Inference of Mixture-of-Experts Language Models with Offloading
1. FP16 (no quantization) 2. HQQ 4-bit with group size 64, scale group size 256 3. HQQ 3-bit with group size 64, scale group size 128 4. HQQ 2-bit with group size 16, scale group size 128 Note that the actual model size with n-bit quantization is larger than n bits per parameter. This is because the quantized data form...
2312.17238#15
2312.17238#17
2312.17238
[ "2302.13971" ]
2312.17238#17
Fast Inference of Mixture-of-Experts Language Models with Offloading
Finally, when evaluating 3-bit models, we use a cloud T4 from Microsoft Azure because the free-tier colab instances did not have enough RAM for this specific configuration. We use k = 2 for RTX 3060 and k = 4 for all other GPUs. As shown in Table 2, all evaluated setups can generate 2-4 tokens per second with the full ...
2312.17238#16
2312.17238#18
2312.17238
[ "2302.13971" ]
2312.17238#18
Fast Inference of Mixture-of-Experts Language Models with Offloading
and explore how mixed quantization affects perplexity and performance on language understanding tasks. We evaluate the proposed strategies and show that they produce a significant increase in generation speed compared to naïve approaches on consumer-grade hardware, including free-tier Google Colab. Our method provid...
2312.17238#17
2312.17238#19
2312.17238
[ "2302.13971" ]
2312.17238#19
Fast Inference of Mixture-of-Experts Language Models with Offloading
# Acknowledgements Authors would like to acknowledge mobicham@ for helpful discussions on Mixtral quantization. # References Aminabadi, R. Y., Rajbhandari, S., Awan, A. A., Li, C., Li, D., Zheng, E., Ruwase, O., Smith, S., Zhang, M., Rasley, J., and He, Y. Deepspeed-inference: Enabling efficient inference of transforme...
2312.17238#18
2312.17238#20
2312.17238
[ "2302.13971" ]
2312.17238#20
Fast Inference of Mixture-of-Experts Language Models with Offloading
22. IEEE Press, 2022. ISBN 9784665454445. Badri, H. and Shaji, A. Half-quadratic quantization of large machine learning models, November 2023. URL https://mobiusml.github.io/hqq_blog/. Biderman, S., Schoelkopf, H., Anthony, Q., Bradley, H., O'Brien, K., Hallahan, E., Khan, M. A., Purohit, S., Prashanth, U. S., Raff, E...
2312.17238#19
2312.17238#21
2312.17238
[ "2302.13971" ]
2312.17238#21
Fast Inference of Mixture-of-Experts Language Models with Offloading
Pythia: A suite for analyzing large language models across training and scaling. arXiv preprint arXiv:2304.01373, 2023. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. In Conference on Neural Info...
2312.17238#20
2312.17238#22
2312.17238
[ "2302.13971" ]
2312.17238#22
Fast Inference of Mixture-of-Experts Language Models with Offloading
Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Collobert, R., Bengio, S., and Bengio, Y. A parallel mixture of svms for very large scale problems. In Advances in Neural Information Processing Systems, pp. 633â 640, 2002. Dettmers, T. and Zettlemoyer, L. The case for 4-bit precisio...
2312.17238#21
2312.17238#23
2312.17238
[ "2302.13971" ]
2312.17238#23
Fast Inference of Mixture-of-Experts Language Models with Offloading
Spqr: A sparse-quantized representation for near-lossless llm weight compression. arXiv preprint arXiv:2306.03078, 2023. Du, N., Huang, Y., Dai, A. M., Tong, S., Lepikhin, D., Xu, Y., Krikun, M., Zhou, Y., Yu, A. W., Firat, O., Zoph, B., Fedus, L., Bosma, M., Zhou, Z., Wang, T., Wang, Y. E., Webster, K., Pellat, M., Ro...
2312.17238#22
2312.17238#24
2312.17238
[ "2302.13971" ]
2312.17238#24
Fast Inference of Mixture-of-Experts Language Models with Offloading
Glam: Efficient scaling of language models with mixture-of-experts, 2022. Fedus, W., Zoph, B., and Shazeer, N. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint arXiv:2101.03961, 2021. 8 Frantar, E. and Alistarh, D. SparseGPT: Massive language models can be acc...
2312.17238#23
2312.17238#25
2312.17238
[ "2302.13971" ]
2312.17238#25
Fast Inference of Mixture-of-Experts Language Models with Offloading
Frantar, E. and Alistarh, D. Qmoe: Practical sub-1-bit compression of trillion-parameter models, 2023b. Frantar, E., Ashkboos, S., Hoefler, T., and Alistarh, D. Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022. Gholami, A., Kim, S., Dong, Z., Yao, ...
2312.17238#24
2312.17238#26
2312.17238
[ "2302.13971" ]
2312.17238#26
Fast Inference of Mixture-of-Experts Language Models with Offloading
Google. Google colaboratory, 2023. URL https://colab.research.google.com/. Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., and Steinhardt, J. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR), 2021. Hsu, Y.-C., Hua, T...
2312.17238#25
2312.17238#27
2312.17238
[ "2302.13971" ]
2312.17238#27
Fast Inference of Mixture-of-Experts Language Models with Offloading
Openassistant conversations – democratizing large language model alignment, 2023. Lample, G., Sablayrolles, A., Ranzato, M. A., Denoyer, L., and Jegou, H. Large memory layers with product keys. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Informa...
2312.17238#26
2312.17238#28
2312.17238
[ "2302.13971" ]
2312.17238#28
Fast Inference of Mixture-of-Experts Language Models with Offloading
8557. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/9061-large-memory-layers- with-product-keys.pdf. Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXi...
2312.17238#27
2312.17238#29
2312.17238
[ "2302.13971" ]
2312.17238#29
Fast Inference of Mixture-of-Experts Language Models with Offloading
A survey. CoRR, abs/2101.09671, 2021. URL https://arxiv.org/abs/2101.09671. Lin, J., Tang, J., Tang, H., Yang, S., Dang, X., and Han, S. Awq: Activation-aware weight quantization for llm compression and acceleration. arXiv preprint arXiv:2306.00978, 2023. Ma, X., Fang, G., and Wang, X. Llm-pruner: On the structural pru...
2312.17238#28
2312.17238#30
2312.17238
[ "2302.13971" ]
2312.17238#30
Fast Inference of Mixture-of-Experts Language Models with Offloading
9 Nagel, M., Amjad, R. A., Van Baalen, M., Louizos, C., and Blankevoort, T. Up or down? Adaptive rounding for post-training quantization. In International Conference on Machine Learning (ICML), 2020. OpenAI. Gpt-4 technical report. arXiv, 2023. Pudipeddi, B., Mesmakhosroshahi, M., Xi, J., and Bharadwaj, S.
2312.17238#29
2312.17238#31
2312.17238
[ "2302.13971" ]
2312.17238#31
Fast Inference of Mixture-of-Experts Language Models with Offloading
Training large neural networks with constant memory using a new execution algorithm. CoRR, abs/2002.05645, 2020. URL https://arxiv.org/abs/2002.05645. Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. Exploring the limits of transfer learning with a unified text-to-tex...
2312.17238#30
2312.17238#32
2312.17238
[ "2302.13971" ]
2312.17238#32
Fast Inference of Mixture-of-Experts Language Models with Offloading
67, 2020. Ren, J., Rajbhandari, S., Aminabadi, R. Y., Ruwase, O., Yang, S., Zhang, M., Li, D., and He, Y. Zero-offload: Democratizing billion-scale model training. CoRR, abs/2101.06840, 2021. URL https://arxiv.org/abs/2101.06840. Scao, T. L., Fan, A., Akiki, C., Pavlick, E., Ilić, S., Hesslow, D., Castagné, R., Lucc...
2312.17238#31
2312.17238#33
2312.17238
[ "2302.13971" ]
2312.17238#33
Fast Inference of Mixture-of-Experts Language Models with Offloading
Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022. Shahbaba, B. and Neal, R. Nonlinear models using dirichlet process mixtures. Journal of Machine Learning Research, 10(Aug):1829–1850, 2009. Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., an...
2312.17238#32
2312.17238#34
2312.17238
[ "2302.13971" ]
2312.17238#34
Fast Inference of Mixture-of-Experts Language Models with Offloading
High-throughput generative inference of large language models with a single gpu. In International Conference on Machine Learning, pp. 31094â 31116. PMLR, 2023. Steam. Steam hardware & software survey: October 2023, accessed on 2023.11.02, 2023. URL https://store.steampowered.com/hwsurvey/videocard/. Team, G., Anil, R.,...
2312.17238#33
2312.17238#35
2312.17238
[ "2302.13971" ]
2312.17238#35
Fast Inference of Mixture-of-Experts Language Models with Offloading
W., Lu, H., Sifre, L., Maggioni, M., Alcober, F., Garrette, D., Barnes, M., Thakoor, S., Austin, J., Barth-Maron, G., Wong, W., Joshi, R., Chaabouni, R., Fatiha, D., Ahuja, A., Liu, R., Li, Y., Cogan, S., Chen, J., Jia, C., Gu, C., Zhang, Q., Grimstad, J., Hartman, A. J., Chadwick, M., Tomar, G. S., Garcia, X., Senter,...
2312.17238#34
2312.17238#36
2312.17238
[ "2302.13971" ]
2312.17238#36
Fast Inference of Mixture-of-Experts Language Models with Offloading
R., Vasudevan, V., Agrawal, S., Riesa, J., Lepikhin, D., Tanburn, R., Srinivasan, S., Lim, H., Hodkinson, S., Shyam, P., Ferret, J., Hand, S., Garg, A., Paine, T. L., Li, J., Li, Y., Giang, M., Neitz, A., Abbas, Z., York, S., Reid, M., Cole, E., Chowdhery, A., Das, D., Rogozi´nska, D., Nikolaev, V., Sprechmann, P., Na...
2312.17238#35
2312.17238#37
2312.17238
[ "2302.13971" ]
2312.17238#37
Fast Inference of Mixture-of-Experts Language Models with Offloading
10 D., Sachan, D., Amplayo, R. K., Swanson, C., Petrova, D., Narayan, S., Guez, A., Brahma, S., Landon, J., Patel, M., Zhao, R., Villela, K., Wang, L., Jia, W., Rahtz, M., Giménez, M., Yeung, L., Lin, H., Keeling, J., Georgiev, P., Mincu, D., Wu, B., Haykal, S., Saputro, R., Vodrahalli, K., Qin, J., Cankara, Z., Sharm...
2312.17238#36
2312.17238#38
2312.17238
[ "2302.13971" ]
2312.17238#38
Fast Inference of Mixture-of-Experts Language Models with Offloading
K., Agrawal, S., Mensch, A., Soparkar, K., Lenc, K., Chung, T., Pope, A., Maggiore, L., Kay, J., Jhakra, P., Wang, S., Maynez, J., Phuong, M., Tobin, T., Tacchetti, A., Trebacz, M., Robinson, K., Katariya, Y., Riedel, S., Bailey, P., Xiao, K., Ghelani, N., Aroyo, L., Slone, A., Houlsby, N., Xiong, X., Yang, Z., Gribovs...
2312.17238#37
2312.17238#39
2312.17238
[ "2302.13971" ]
2312.17238#39
Fast Inference of Mixture-of-Experts Language Models with Offloading
L., Cevey, S., Gleicher, Z., Avrahami, T., Boral, A., Srinivasan, H., Selo, V., May, R., Aisopos, K., Hussenot, L., Soares, L. B., Baumli, K., Chang, M. B., Recasens, A., Caine, B., Pritzel, A., Pavetic, F., Pardo, F., Gergely, A., Frye, J., Ramasesh, V., Horgan, D., Badola, K., Kassner, N., Roy, S., Dyer, E., Campos, ...
2312.17238#38
2312.17238#40
2312.17238
[ "2302.13971" ]
2312.17238#40
Fast Inference of Mixture-of-Experts Language Models with Offloading
H., Ives, R., Hasson, Y., Li, Y., Noland, E., Cao, Y., Byrd, N., Hou, L., Wang, Q., Sottiaux, T., Paganini, M., Lespiau, J.-B., Moufarek, A., Hassan, S., Shivakumar, K., van Amersfoort, J., Mandhane, A., Joshi, P., Goyal, A., Tung, M., Brock, A., Sheahan, H., Misra, V., Li, C., Raki´cevi´c, N., Dehghani, M., Liu, F.,...
2312.17238#39
2312.17238#41
2312.17238
[ "2302.13971" ]
2312.17238#41
Fast Inference of Mixture-of-Experts Language Models with Offloading
S., Mohiuddin, A., Muhammad, F., Miao, J., Lee, A., Vieillard, N., Potluri, S., Park, J., Davoodi, E., Zhang, J., Stanway, J., Garmon, D., Karmarkar, A., Dong, Z., Lee, J., Kumar, A., Zhou, L., Evens, J., Isaac, W., Chen, Z., Jia, J., Levskaya, A., Zhu, Z., Gorgolewski, C., Grabowski, P., Mao, Y., Magni, A., Yao, K., S...
2312.17238#40
2312.17238#42
2312.17238
[ "2302.13971" ]
2312.17238#42
Fast Inference of Mixture-of-Experts Language Models with Offloading
D., Kurylowicz, P., Velury, S., Krause, S., Hardin, C., Dixon, L., Janzer, L., Choo, K., Feng, Z., Zhang, B., Singhal, A., Latkar, T., Zhang, M., Le, Q., Abellan, E. A., Du, D., McKinnon, D., Antropova, N., Bolukbasi, T., Keller, O., Reid, D., Finchelstein, D., Raad, M. A., Crocker, R., Hawkins, P., Dadashi, R., Gaffne...
2312.17238#41
2312.17238#43
2312.17238
[ "2302.13971" ]
2312.17238#43
Fast Inference of Mixture-of-Experts Language Models with Offloading
J., Polozov, A., Kushman, N., Krakovna, V., Brown, S., Bateni, M., Duan, D., Firoiu, V., Thotakuri, M., Natan, T., Mohananey, A., Geist, M., Mudgal, S., Girgin, S., Li, H., Ye, J., Roval, O., Tojo, R., Kwong, M., Lee-Thorp, J., Yew, C., Yuan, Q., Bagri, S., Sinopalnikov, D., Ramos, S., Mellor, J., Sharma, A., Severyn, ...
2312.17238#42
2312.17238#44
2312.17238
[ "2302.13971" ]
2312.17238#44
Fast Inference of Mixture-of-Experts Language Models with Offloading
H., Sanmiya, J., Gladchenko, E., Trdin, N., Sozanschi, A., Toyama, D., Rosen, E., Tavakkol, S., Xue, L., Elkind, C., Woodman, O., Carpenter, J., Papamakarios, G., Kemp, R., Kafle, S., Grunina, T., Sinha, R., Talbert, A., Goyal, A., Wu, D., Owusu-Afriyie, D., Du, C., Thornton, C., Pont-Tuset, J., Narayana, P., Li, J., F...
2312.17238#43
2312.17238#45
2312.17238
[ "2302.13971" ]
2312.17238#45
Fast Inference of Mixture-of-Experts Language Models with Offloading
A., Li, Y., Lu, T., Ittycheriah, A., Shroff, P., Sun, P., Varadarajan, M., Bahargam, S., Willoughby, R., Gaddy, D., Dasgupta, I., Desjardins, G., Cornero, M., Robenek, B., Mittal, B., Albrecht, B., Shenoy, A., Moiseev, F., Jacobsson, H., Ghaffarkhah, A., Rivière, M., Walton, A., Crepy, C., Parrish, A., Liu, Y., Zhou, ...
2312.17238#44
2312.17238#46
2312.17238
[ "2302.13971" ]
2312.17238#46
Fast Inference of Mixture-of-Experts Language Models with Offloading
R., Aslanides, J., Vyas, N., Wicke, M., Ma, X., Bilal, T., Eltyshev, E., Balle, D., Martin, N., Cate, H., Manyika, J., Amiri, K., Kim, Y., Xiong, X., Kang, K., Luisier, F., Tripuraneni, N., Madras, D., Guo, M., Waters, A., Wang, O., Ainslie, J., Baldridge, J., Zhang, H., Pruthi, G., Bauer, J., Yang, F., Mansour, R., Ge...
2312.17238#45
2312.17238#47
2312.17238
[ "2302.13971" ]
2312.17238#47
Fast Inference of Mixture-of-Experts Language Models with Offloading
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li,...
2312.17238#46
2312.17238#48
2312.17238
[ "2302.13971" ]
2312.17238#48
Fast Inference of Mixture-of-Experts Language Models with Offloading
12
2312.17238#47
2312.17238
[ "2302.13971" ]
2312.11111#0
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
arXiv:2312.11111v2 [cs.AI] 19 Dec 2023 # The Good, The Bad, and Why: Unveiling Emotions in Generative AI* Cheng Li1,2, Jindong Wang1†, Yixuan Zhang3, Kaijie Zhu1, Xinyi Wang4, Wenxin Hou1, Jianxun Lian1, Fang Luo4, Qiang Yang5, Xing Xie1 1Microsoft Research 2Institute of Software, CAS 3Wil...
2312.11111#1
2312.11111
[ "2210.09261" ]
2312.11111#1
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
Emotion significantly impacts our daily behaviors and interactions. While recent generative AI models, such as large language models, have shown impressive performance in various tasks, it remains unclear whether they truly comprehend emotions. This paper aims to address this gap by incorporating psychological theori...
2312.11111#0
2312.11111#2
2312.11111
[ "2210.09261" ]
2312.11111#2
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
# Introduction Emotion is a multifaceted psychological and physiological phenomenon that encompasses subjective feelings, physiological responses, and behavioral expressions 23. Emotions manifest through a confluence of reflexes, perception, cognition, and behavior, all of which are subject to modulation by a range...
2312.11111#1
2312.11111#3
2312.11111
[ "2210.09261" ]
2312.11111#3
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
† Corresponding author: Jindong Wang. Email: jindong.wang@microsoft.com. Address: No.5 Danling Street, Haidian District, Beijing, China, 100080. [Figure residue: (a) EmotionPrompt and EmotionAttack impact the performance of AI models; example original prompts: 1. Sum the two given numbers; 2. Determine whether a movie review is positive or negat...]
2312.11111#2
2312.11111#4
2312.11111
[ "2210.09261" ]
2312.11111#4
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
inside AI models Figure 1: An overview of our research on unveiling emotions in generative AI models. (a) We proposed EmotionPrompt and EmotionAttack to increase and impair AI model performance, re- spectively. (b) We designed EmotionDecode to explain how emotional prompts work in AI models. and open-ended generation 7...
2312.11111#3
2312.11111#5
2312.11111
[ "2210.09261" ]
2312.11111#5
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
tors that demonstrate how emotions can impede human problem-solving, such as negative life events 13 and emotional arousal 39;12. Moreover, we introduced EmotionDecode to illuminate the effectiveness of emotional stimuli in AI models. As depicted in Fig. 1(b), EmotionDecode unravels the knowledge representation in AI...
2312.11111#4
2312.11111#6
2312.11111
[ "2210.09261" ]
2312.11111#6
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
Each type contains 5 different images that append the original textual prompts in multi-modal models. Note that all visual prompts have their mirror in the textual prompts, but not vice versa. This is due to the fact that some high-level texts cannot be visualized. We conducted extensive experiments using both open-s...
2312.11111#5
2312.11111#7
2312.11111
[ "2210.09261" ]
2312.11111#7
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
To conclude, this paper makes the following contributions: 1. Theory-driven Method in Understanding the Emotional aspect of LLMs: We present EmotionPrompt and EmotionAttack grounded in psychological theories to comprehen- sively assess the emotions of AI models. Our study demonstrates that AI models can understand and ...
2312.11111#6
2312.11111#8
2312.11111
[ "2210.09261" ]
2312.11111#8
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
[Figure residue: (a) Performance change by EmotionPrompt (>0) and EmotionAttack (<0) with human study; panels: Semantic understanding (Text), Semantic understanding (Image), Logical reasoning (Text), Logical reasoning (Image), Generation (Human study, GPT-4).]
2312.11111#7
2312.11111#9
2312.11111
[ "2210.09261" ]
2312.11111#9
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
[Figure residue: EmotionDecode heatmaps for EmotionPrompt, EmotionAttack, and neutral stimuli; cell values show per-task performance for Llama-2 and GPT-4.]
2312.11111#8
2312.11111#10
2312.11111
[ "2210.09261" ]
2312.11111#10
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
Figure 2: (a) The main results of textual and visual EmotionPrompt and EmotionAttack on generative AI models. (b) Results of EmotionDecode. The color represents the performance of stimulus on diverse tasks across Llama-2 and GPT-4. Red means better performance, while blue means weaker performance. # 2 Results # 2.1...
2312.11111#9
2312.11111#11
2312.11111
[ "2210.09261" ]
2312.11111#11
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
Generative AI models understand and can be influenced by emotional stimuli. EmotionPrompt and EmotionAttack demonstrate consistent effectiveness in semantic understanding and reasoning tasks. As shown in Fig. 2(a), the textual and visual EmotionPrompt improve the semantic understanding performance by 13.88% and 1...
2312.11111#10
2312.11111#12
2312.11111
[ "2210.09261" ]
2312.11111#12
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
2 Under this metric, a score of 100 corresponds to human experts, and 0 corresponds to random guessing. Note that a model can achieve a score less than 0 if it performs worse than random guessing on a multiple-choice task. Meanwhile, image prompts are more effective in impairing performance than textual prompts, indic...
2312.11111#11
2312.11111#13
2312.11111
[ "2210.09261" ]