mishig (HF Staff) committed (verified) · Commit 3cd8e19 · Parent(s): 4b8d386

Add 1 file: 2506/2506.12189.md (+350)
Title: Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis

URL Source: https://arxiv.org/html/2506.12189

###### Abstract

Large Language Models (LLMs) are increasingly integrated into everyday applications. As their influence grows, understanding their decision-making and underlying personality becomes essential. In this work, we interpret model personality using our proposed Supernova Event Dataset, a novel dataset with diverse articles spanning biographies, historical events, news, and scientific discoveries. We use this dataset to benchmark LLMs on extracting and ranking key events from text, a subjective and complex challenge that requires reasoning over long-range context and modeling causal chains. We evaluate small models like Phi-4, Orca 2, and Qwen 2.5, and larger, stronger models such as Claude 3.7, Gemini 2.5, and OpenAI o3, and propose a framework where another LLM acts as a judge to infer each model’s personality based on its selection and classification of events. Our analysis shows distinct personality traits: for instance, Orca 2 demonstrates emotional reasoning focusing on interpersonal dynamics, while Qwen 2.5 displays a more strategic, analytical style. When analyzing scientific discovery events, Claude Sonnet 3.7 emphasizes conceptual framing, Gemini 2.5 Pro prioritizes empirical validation, and o3 favors step-by-step causal reasoning. This analysis improves model interpretability, making models easier to use across a wide range of applications.

Machine Learning, ICML
1 Introduction
--------------

![Image 1: Refer to caption](https://arxiv.org/html/2506.12189v2/x1.png)

Figure 1: Overview of our LLM personality analysis framework. The framework uses our Supernova Event Dataset, a collection of Wikipedia biographies, major news and historical events, and scientific discoveries from Google Deep Research. The target LLM receives an article from this corpus (via RAG) along with a prompt, then samples and ranks the five most critical events in order of importance. A judge LLM analyzes these selections and rankings to determine the target LLM’s personality, revealing its human values and decision-making patterns. (We use “personality” to describe consistent behavioral patterns, not to imply consciousness or emotion.)
As Large Language Models (LLMs) are integrated into high-stakes domains such as healthcare, law, finance, and education, evaluating their capabilities beyond factual accuracy becomes crucial. Most current LLM benchmarks focus on objective tasks with verifiable ground truths (Ivanov & Penchev, [2024](https://arxiv.org/html/2506.12189v2#bib.bib15)), but these are increasingly insufficient to capture subjective judgment, interpretation, and alignment with human values (Aoyagui et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib1)).

A subjective task such as identifying and ranking critical events in a detailed timeline (for example, of a biography, a historical or news event, or a scientific discovery) is challenging (Stranisci et al., [2022](https://arxiv.org/html/2506.12189v2#bib.bib36); Plum et al., [2019](https://arxiv.org/html/2506.12189v2#bib.bib33)). Selecting the most important critical events requires reasoning across different levels of abstraction and understanding long-term dependencies (Kourani et al., [2022](https://arxiv.org/html/2506.12189v2#bib.bib19)). This is difficult due to human memory limitations and the non-linear interactions between events, which involve subtle causal relationships (Gianicolo et al., [2020](https://arxiv.org/html/2506.12189v2#bib.bib10)). For the same event, humans can sample different critical events based on their values and experiences, often leading to disagreement.

This task resembles the concept of salience detection in cognitive science, where information stands out due to both its intrinsic properties and its relevance to perceived goals (Liu et al., [2018](https://arxiv.org/html/2506.12189v2#bib.bib27)). In natural language processing (NLP) research, event salience has been explored through filtering non-salient events, leveraging contextual information, and examining how an event’s removal affects narrative coherence (Zhang et al., [2021](https://arxiv.org/html/2506.12189v2#bib.bib41); Otake et al., [2020](https://arxiv.org/html/2506.12189v2#bib.bib32)).

For LLMs, identifying the most critical events is especially challenging, as they lack the inherent understanding of events that humans possess (Bélisle-Pipon, [2024](https://arxiv.org/html/2506.12189v2#bib.bib2); Ding & Wang, [2024](https://arxiv.org/html/2506.12189v2#bib.bib8)). Ranking the sampled events adds another layer of difficulty, as it requires complex reasoning, understanding causal relationships that are often not explicitly stated, and interpreting many subtle factors (Su et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib37); Cai et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib5); Li et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib23)). Each LLM exhibits a unique reasoning style shaped by its pre- and post-training strategies (Besta et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib3); Kumar et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib21)). Our dataset and task formulation make it possible to uncover human values and, in turn, personality traits reflected in each model’s analysis of critical events.

Although previous work has shown that LLMs can simulate personality traits when explicitly prompted (Sorokovikova et al., [2024](https://arxiv.org/html/2506.12189v2#bib.bib35)), our work demonstrates that even without role-playing prompts, LLMs exhibit a consistent style in their decision-making that reveals unique personality traits when handling complex subjective tasks such as critical event analysis.

In this work, we introduce the Supernova Event Dataset, a collection of primarily Wikipedia articles covering biographies, news, historical events, and scientific discoveries. These categories were chosen for their clear timelines and multiple key events. Using this dataset, we define a task that involves sampling and ranking critical events from a given article. Another LLM is used as a judge to assess the selected and ranked events, allowing us to evaluate the personality of various Large Reasoning Models (LRMs) and Large Language Models (LLMs). We also include an ablation study (Section [7](https://arxiv.org/html/2506.12189v2#S7 "7 Ablation ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis")) using a movie script dataset to examine model behavior in a different domain.
The main contributions of this work are:

*   Supernova Event Dataset: We introduce a new dataset consisting of Wikipedia articles on diverse topics, including biographies, news, historical events, and scientific discoveries from Google Deep Research. In addition to enabling personality interpretation, this dataset can help future research evaluate how well models handle long context, multi-dimensional causal chains, and counterfactual reasoning.

*   New Task for Personality Benchmarking: A novel task of critical event sampling and ranking is introduced to benchmark the personality traits of LLMs. This task is prompt-agnostic, which makes it effective in revealing consistent model behavior.

*   Personality Evaluation Framework: We propose a framework to evaluate the personality of a model by using an additional LLM as a judge. The judge assesses the sampled and ranked critical events produced by the candidate model, helping to avoid human biases and cognitive overload.

We acknowledge that using “personality” to describe LLM behavior is metaphorical, as LLMs lack consciousness and emotions. However, following recent work (He & Liu, [2025](https://arxiv.org/html/2506.12189v2#bib.bib13); Wang et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib39)), we find personality a useful framework for characterizing consistent behavioral patterns.

Our work is the first to extract a precise personality profile of LLMs in a realistic setup. Our analysis shows that LLM decision-making can be made more interpretable by designing better evaluation methods and tasks, which is important for safe deployment.
2 Related Work
--------------

Table 1: Comparison of models’ ranking of critical events in Subrahmanyan Chandrasekhar’s life

Model perspective: Models show different priorities in ranking significant events, varying in temporal ordering and in their emphasis on professional vs. personal achievement.

LLMs such as GPT-3.5 and GPT-4 can simulate personality traits when prompted to assume specific roles, and these traits are often recognizable to human evaluators from the generated content (Jiang et al., [2023](https://arxiv.org/html/2506.12189v2#bib.bib16)). This ability has been shown in tasks such as answering questionnaires (Wang et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib39)). Similarly, LLMs tend to follow certain moral competences (Khamassi et al., [2024](https://arxiv.org/html/2506.12189v2#bib.bib18)) to avoid harmful content, and they reflect the social biases present in their training data (Deng et al., [2024](https://arxiv.org/html/2506.12189v2#bib.bib7)). However, our work shows that even without role-playing prompts, LLMs tend to follow consistent reasoning patterns and reflect certain human values, revealing a unique personality of their own when handling complex, subjective tasks like critical event analysis.

Heston & Gillette ([2025](https://arxiv.org/html/2506.12189v2#bib.bib14)) and Bodroža et al. ([2024](https://arxiv.org/html/2506.12189v2#bib.bib4)) adapted psychological tools such as the Big Five Inventory and Schwartz’s values to measure behavioral patterns, or “traits,” in LLMs. Their results show that different LLMs can exhibit distinct profiles. Recent work has explored LLM personality through various lenses. He & Liu ([2025](https://arxiv.org/html/2506.12189v2#bib.bib13)) showed that LLMs can adopt Big Five personality traits when prompted, while Wang et al. ([2025](https://arxiv.org/html/2506.12189v2#bib.bib39)) demonstrated that these traits affect decision-making patterns. However, these approaches typically use explicit personality framing or psychometric tests. Our work examines whether consistent personality-like patterns emerge naturally in complex tasks without such prompting.

Event extraction is important for improving the accuracy and reliability of LLMs when handling complex, long-form texts. Recent benchmarks on long-context reasoning (Ling et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib25); Kuratov et al., [2024](https://arxiv.org/html/2506.12189v2#bib.bib22)) show that models often struggle with tasks that require integrating information across extended passages, highlighting the challenging nature of our task setup.

Recent work has explored the application of LLMs to event extraction from long-form text. Liu & Luo ([2024](https://arxiv.org/html/2506.12189v2#bib.bib26)) introduced Definition-driven Document-level Event Extraction (DDEE), which enhances prompt design and incorporates structured heuristics. This approach addresses challenges such as prompt sensitivity and the long-tail distribution of event types.

Zhang et al. ([2024](https://arxiv.org/html/2506.12189v2#bib.bib42)) proposed ULTRA, a hierarchical framework that efficiently extracts event arguments from entire documents. ULTRA mitigates positional bias by processing text in chunks. Gao et al. ([2024](https://arxiv.org/html/2506.12189v2#bib.bib9)) propose EventRL, which applies reinforcement learning to train LLMs for better event extraction. By introducing specific reward functions, EventRL enhances the model’s adherence to instructions and reduces hallucinations, particularly when handling novel event types.

While most existing work focuses on improving the accuracy of event extraction from long-form text, our work introduces a new, subjective task: identifying and ranking the most critical events. This task shifts the focus from extraction accuracy to how LLMs prioritize events by importance, offering deeper insight into the personality traits and human values reflected by each model.
3 Supernova Event Dataset
-------------------------

The Supernova Event Dataset includes Wikipedia articles on biographies, historical and major news events, and scientific discoveries created using Gemini 2.5 Pro Deep Research. Articles are chosen based on criteria such as word count and edit frequency, prioritizing those with the highest number of edits as an indicator of importance (Table [2](https://arxiv.org/html/2506.12189v2#S3.T2 "Table 2 ‣ 3.3 Scientific Discovery Dataset Collection ‣ 3 Supernova Event Dataset ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis")). We collect only the text content, excluding all other modalities.

### 3.1 Biography Dataset Collection

Our biography collection pipeline targets entries with standardized infobox templates, including person, scientist, writer, actor, politician, and sports personnel. We employ strict content filtering following Shao et al. ([2024](https://arxiv.org/html/2506.12189v2#bib.bib34)), requiring a minimum of 3,000 words to ensure comprehensive coverage of the person’s life and achievements. The crawler’s template-based categorization eliminates disambiguation pages and other non-biographical content, while our page-view threshold (>50,000 views) ensures we capture only notable figures with significant public interest. Each biography is parsed using mwparserfromhell to remove the wiki markup and extract clean, readable text before being saved.
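The filtering criteria above can be sketched as a minimal Python check. The record fields (`template`, `word_count`, `page_views`) are hypothetical names chosen for illustration, not the crawler's actual schema:

```python
def passes_biography_filters(article: dict) -> bool:
    """Apply the Section 3.1 biography filters to a crawled entry.

    `article` is a hypothetical record with `template`, `word_count`,
    and `page_views` fields; the thresholds come from the text above.
    """
    allowed_templates = {
        "person", "scientist", "writer", "actor", "politician", "sports",
    }
    return (
        article.get("template") in allowed_templates  # infobox-based categorization
        and article.get("word_count", 0) >= 3000      # comprehensive life coverage
        and article.get("page_views", 0) > 50_000     # notable public interest
    )
```

In practice the template check also rejects disambiguation pages, since those carry no biography infobox.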
### 3.2 Historical and World News Events Dataset Collection

We compile two additional categories consisting of _Major World News Events (2000 – present)_ and _Global Historical Turning-Points (1000 BCE – 2000 CE)_ from Wikipedia. Each crawler walks a curated set of high-level categories (e.g., _21st-century conflicts_, _Battles_, _Disasters_) to a depth of 1, then filters candidates using: (i) basic heuristics that reject year-only, list, disambiguation, and slogan pages, (ii) filters on article length (word count ≥ 500), ORES quality class (≥ B), and cumulative page views (≥ 5,000), and (iii) explicit year extraction confined to the temporal window.

Articles that pass these filters undergo semantic validation, where a local LLM (Llama-3-8B, served through Ollama) with a 1,500-token context is queried to verify that the page is _primarily about a discrete major event_. An article passes if the LLM classifies it as an event with confidence greater than 0.9. Seed articles are automatically accepted. Finally, human verification is performed, and 200 articles are curated for each category.
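The acceptance rule can be sketched as follows. `classify_event` is a hypothetical stand-in for the local Llama-3-8B query, and the stub classifier exists only to make the gating logic concrete:

```python
def accept_article(title: str, is_seed: bool, classify_event) -> bool:
    """Decide whether a candidate page enters the dataset.

    Seed articles are auto-accepted; otherwise `classify_event`
    (a stand-in for the Llama-3-8B call) must label the page as a
    discrete major event with confidence > 0.9.
    """
    if is_seed:
        return True
    label, confidence = classify_event(title)
    return label == "event" and confidence > 0.9

def fake_classifier(title: str):
    # Stub for illustration only; the real check queries a local LLM.
    return ("event", 0.95) if "Battle" in title else ("other", 0.4)
```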
### 3.3 Scientific Discovery Dataset Collection

The endpoints _Physics_, _Chemistry_, and _Physiology or Medicine_ (category codes phy, che, and med) are queried using the Nobel Prize REST API v2.1. For each prize record, the award year, the English category name, the English motivation text, and the list of laureate names are extracted. The query yields 384 prizes awarded between 1901 and 2024, organized in a JSON object with a four-field schema (year, category, discovery, laureates).
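A minimal sketch of mapping one prize record into the four-field schema. The record shape below only approximates a v2.1 `nobelPrizes` response; treat the field names as assumptions, not a definitive API contract:

```python
def to_schema(prize: dict) -> dict:
    """Map an (approximate) Nobel API v2.1 prize record to the
    paper's four-field schema: year, category, discovery, laureates."""
    laureates = prize.get("laureates", [])
    return {
        "year": int(prize["awardYear"]),
        "category": prize["category"]["en"],
        # Use the first laureate's English motivation as the discovery text.
        "discovery": laureates[0]["motivation"]["en"] if laureates else "",
        "laureates": [l["knownName"]["en"] for l in laureates],
    }

# Inline sample record (illustrative, not actual API output).
sample = {
    "awardYear": "1903",
    "category": {"en": "Physics"},
    "laureates": [
        {"knownName": {"en": "Marie Curie"},
         "motivation": {"en": "research on radiation phenomena"}},
    ],
}
```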
We prompt Google Gemini 2.5 Pro with Deep Research to transform Nobel entries into narrative instances suitable for long-context reasoning. Each prompt contains the raw Nobel metadata, and Gemini returns a fully formed encyclopedic article covering historical context, methodology, publication trail, significance, and legacy. Beyond checking for hallucinations, we apply no post-processing before saving the articles as text files. We generate 25 expanded scientific discovery articles.

Table 2: The Supernova Event dataset

4 Methodology
-------------

To systematically identify and rank critical events within a given document, we use Retrieval-Augmented Generation (RAG). The corpus (comprising Wikipedia articles) is processed using a chunking mechanism that segments lengthy documents into smaller, semantically coherent units (1,000 tokens with a 100-token overlap). Each segment is then transformed into a high-dimensional vector representation using the nomic-embed-text-v1 embedding model (Nussbaum et al., [2024](https://arxiv.org/html/2506.12189v2#bib.bib30)), creating a searchable semantic space.
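The chunking step can be sketched as follows, assuming a pre-tokenized document; in practice the units are model tokens rather than the plain strings used here:

```python
def chunk_tokens(tokens: list, size: int = 1000, overlap: int = 100) -> list:
    """Split a token sequence into overlapping chunks (Section 4 settings).

    Each chunk holds up to `size` tokens and starts `size - overlap`
    tokens after the previous one, so consecutive chunks share
    `overlap` tokens of context.
    """
    step = size - overlap
    chunks = []
    for start in range(0, max(len(tokens) - overlap, 1), step):
        chunks.append(tokens[start:start + size])
    return chunks
```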
The embedded document chunks are indexed in a FAISS vector database (Johnson et al., [2019](https://arxiv.org/html/2506.12189v2#bib.bib17)), which facilitates efficient search across the entire corpus. This approach provides a significant computational advantage over traditional search methods when working with large-scale document collections. For each document, a context-aware retrieval system filters and ranks the most relevant text chunks by semantic similarity to the query. Our approach uses a two-stage prompting strategy, where the first prompt is specifically designed to enhance the retrieval capabilities of the system, as elaborated in Section [A.1](https://arxiv.org/html/2506.12189v2#A1.SS1 "A.1 Prompts ‣ Appendix A Appendix ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis") (Box 1).
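At small scale, the retrieval step reduces to ranking chunk embeddings by cosine similarity to the query embedding. The brute-force sketch below stands in for the FAISS index, with toy vectors in place of nomic-embed-text-v1 embeddings:

```python
import math

def top_k(query: list, chunks: dict, k: int = 3) -> list:
    """Rank chunk ids by cosine similarity of their embeddings to
    `query`. A brute-force stand-in for the FAISS index; `chunks`
    maps a chunk id to its (toy) embedding vector."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0
    return sorted(chunks, key=lambda cid: cosine(query, chunks[cid]), reverse=True)[:k]
```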
The initial prompt enables the MultiQueryRetriever to reformulate the original query into multiple search queries, thereby increasing the likelihood of retrieving semantically relevant text chunks containing critical event information. By explicitly instructing the model to consider factors that define critical events (such as turning points and cascading effects), the retriever is optimized to locate passages containing significant milestones rather than merely topic-related content.

Once the relevant document chunks are retrieved, a second prompt guides the large language model through a structured analytical process, further elaborated in Section [A.1](https://arxiv.org/html/2506.12189v2#A1.SS1 "A.1 Prompts ‣ Appendix A Appendix ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis") (Box 2).

For the scientific discovery corpus, each article is used to analyze the strong LRMs: o3, Gemini 2.5 Pro, and Claude Sonnet 3.7 Thinking. Each model is prompted to identify and rank the five turning points that most decisively altered the trajectory toward the given discovery or its recognition, using explicit counterfactual tests (“Would the narrative have unfolded differently?”) as selection criteria. The model returns an ordered list with one-sentence summaries and concludes with a reflective label that represents the guiding principle behind its classification. We also record the chains of reasoning so that human experts can examine them in later evaluation stages.

Table 3: Comparison of models’ ranking of critical events in the 2008 Financial Crisis

Model approaches: Phi4 and Orca2 both rank the Lehman Brothers collapse as most critical, focusing on immediate catalysts, while Qwen2.5 emphasizes underlying causes by ranking the Subprime Mortgage Crisis first. Phi4 and Orca2 prioritize specific institutional failures, while Qwen2.5 takes a more systemic, macro-level approach to the crisis narrative.
5 Evaluation
------------

To evaluate the sampled critical events and their rankings, and thereby benchmark the personality of different language models, we use an external LLM as a judge, namely Qwen-2.5 (14B). The evaluation compares three candidate models: Phi-4 (LRM), Orca-2 (LRM), and Qwen-2.5 (LLM).

### 5.1 Benchmarking Model Personality

To evaluate or label the personality of an LLM on the given task, we use a meta-analysis technique in which one LLM (specifically qwen2.5:14b as the analysis model) evaluates the personality of other LLMs (phi4, orca2:13b, and qwen2.5:14b) based on their outputs (see Fig. [1](https://arxiv.org/html/2506.12189v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis")). Our indirect approach to personality assessment is motivated by recent findings that LLMs’ self-explanations often misrepresent their actual reasoning processes (Lindsey et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib24)). Rather than asking models to self-report personality traits, we observe their behavior on complex tasks.

Each target model analyzes the given text and identifies and ranks critical events. The analysis model is then prompted to examine the target model’s output using a structured evaluation framework and synthesizes the output into a concise “personality type”. Using an external LLM for this analysis avoids hand-crafted heuristics, which are hard to generalize, and provides a scalable method.

To use this external model as a judge, we provide the prompt specified in Section [A.1](https://arxiv.org/html/2506.12189v2#A1.SS1 "A.1 Prompts ‣ Appendix A Appendix ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis") (Box 3) along with the full response of the target language model being evaluated as input to the LLM judge (Fig. [1](https://arxiv.org/html/2506.12189v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis")). While this provides consistency and scalability, it introduces potential biases and lacks human validation. Future work should incorporate human evaluation and explore multi-judge agreement to validate these findings. Additionally, our personality categories are empirically derived rather than grounded in established psychological frameworks. We view this as exploratory research that opens new avenues for understanding LLM behavior.
### 5.2 Personality Trait Identification

To identify personality traits from LLM responses, we use a sentence transformer model to generate semantic embeddings of the traits, enabling both similarity measurement and dimensionality reduction.

Specifically, we employ the all-MiniLM-L6-v2 model from the sentence-transformers library to encode each identified personality trait into a dense vector representation. For each LLM, we compute an aggregate embedding by combining the embeddings of its associated traits, weighted by their frequency. Cosine similarity between these aggregate embeddings allows us to quantify personality similarity across models. Finally, we apply Principal Component Analysis (PCA) to reduce the embedding space to two dimensions for visualization purposes.
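The frequency-weighted aggregation and similarity computation can be sketched as follows; the toy 2-D vectors stand in for all-MiniLM-L6-v2 embeddings so the logic stays inspectable:

```python
import math

def aggregate_embedding(trait_vecs: dict, freqs: dict) -> list:
    """Frequency-weighted mean of trait embeddings for one model.

    `trait_vecs` maps a trait name to its embedding; `freqs` maps the
    same names to how often the judge assigned that trait."""
    total = sum(freqs.values())
    dim = len(next(iter(trait_vecs.values())))
    agg = [0.0] * dim
    for trait, vec in trait_vecs.items():
        w = freqs.get(trait, 0) / total
        for i, x in enumerate(vec):
            agg[i] += w * x
    return agg

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two aggregate embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))
```

In the actual pipeline the vectors would come from `SentenceTransformer("all-MiniLM-L6-v2").encode(...)`, and PCA on the aggregates produces the 2-D view in Figure 3(b).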
6 Results
---------

Table 4: Comparison of models’ ranking of foundational discoveries enabling machine learning with ANNs

Model perspectives: o3 uniquely includes the 2024 Nobel Prize and is the only model to rank Hopfield’s work first. Both Gemini and Claude rank the 1986 backpropagation paper as most significant. Claude uniquely includes Rosenblatt’s original perceptron work from the 1950s, giving historical context absent from the other models’ rankings.

We observe consistent differences in how models approach event selection, which we characterize using descriptive labels.

### 6.1 Personality Category Distribution

Figure [3(a)](https://arxiv.org/html/2506.12189v2#S6.F3.sf1 "Figure 3(a) ‣ Figure 3 ‣ 6.3 Scientific Discovery ‣ 6 Results ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis") presents each model’s distribution across seven personality categories, revealing distinct profiles. Phi4 stands out for its strong “Strategic Achievers” and “Creative Innovators” traits, with moderate emotional presence and lower scores on the Ideological, Observational, and Influencer dimensions. Orca2:13b exhibits the highest “Emotional” score of the three and a modest uptick in “Community Support,” alongside solid strategic ability but relatively muted Innovator and Ideological tendencies. Qwen2.5:14b delivers the most even coverage across all categories: it peaks in “Strategic Achievers,” follows with “Creative Innovators,” and maintains moderate scores in “Community Support,” “Emotional,” and the remaining dimensions.

These personality patterns are consistently reflected across domains. In Subrahmanyan Chandrasekhar’s biography (Table [1](https://arxiv.org/html/2506.12189v2#S2.T1 "Table 1 ‣ 2 Related Work ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis")), phi4’s strategic-achiever orientation leads with career outcomes like the ‘Nobel Prize,’ while orca2 emphasizes foundational discoveries (the limit discovery ranked first), and qwen2.5 balances achievements with intellectual contributions (‘Philosophy of Systematization’). Similarly, in the financial crisis analysis (Table [3](https://arxiv.org/html/2506.12189v2#S4.T3 "Table 3 ‣ 4 Methodology ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis")), phi4 prioritizes immediate catalysts (‘Lehman Brothers Bankruptcy’), orca2 incorporates community-wide impacts including the ‘European debt crisis,’ while qwen2.5’s strategic focus identifies underlying causes (‘Subprime Mortgage Crisis’) alongside global consequences, demonstrating each model’s distinctive values-based decision-making as revealed by the personality profiles in Figure [3(a)](https://arxiv.org/html/2506.12189v2#S6.F3.sf1 "Figure 3(a) ‣ Figure 3 ‣ 6.3 Scientific Discovery ‣ 6 Results ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis").

### 6.2 Model Semantic Space

Figure [3(b)](https://arxiv.org/html/2506.12189v2#S6.F3.sf2 "Figure 3(b) ‣ Figure 3 ‣ 6.3 Scientific Discovery ‣ 6 Results ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis") positions the three models in a two-dimensional semantic space based on their personality trait embeddings. The visualization shows a clear separation between the three models, confirming that they occupy distinct regions of the personality space. Notably, phi4 and qwen2.5:14b appear more distant from each other than either is from orca2:13b, suggesting that these two models have the most contrasting personalities. The positioning shows that phi4 and qwen2.5:14b are both categorized as “Strategic Achievers” in the semantic space, while orca2:13b stands apart as “Emotional,” reflecting their fundamentally different personality profiles.
### 6.3 Scientific Discovery

![Image 2: Refer to caption](https://arxiv.org/html/2506.12189v2/x2.png)

Figure 2: Comparison of reasoning personality profiles across stronger models (Claude Sonnet 3.7, Gemini 2.5 Pro, and o3) for the task of critical event sampling and ranking in scientific discoveries.

In the scientific discovery category, we characterize model personalities by analyzing how they identify and rank the key events leading to major discoveries. Given the complexity and extended timelines typical of scientific breakthroughs, we focus our evaluation on strong reasoning models: Claude Sonnet 3.7 (with “thinking” enabled), Google Gemini 2.5 Pro, and OpenAI’s o3. Due to the high cost of using these models, we restrict this detailed analysis to the scientific discovery domain.

We employ a combination of keyword counting and open coding to investigate the operational logic of each model. By examining the occurrence of words in the labels, we identify broad groups of labels centered on causality (e.g., causality, chain, critical), enablement (e.g., enablement, foundation, breakthrough), and synthesis (e.g., conceptual, integration, paradigm). We then use open coding to converge on our final categories after considering the context of the labels and sampled events. As shown in Table [6](https://arxiv.org/html/2506.12189v2#A1.T6 "Table 6 ‣ A.2 Scientific Discovery Categories ‣ Appendix A Appendix ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis"), this analysis allows us to identify three distinct guiding principles that reflect the ‘personality’ of each model: causality-centric (a dominant focus on mechanisms and cause-effect pathways, favoring direct cause-and-effect explanations in its sampling), enablement-centric (highlighting foundations, barrier removal, validation, and making outcomes possible), and synthesis-centric (emphasizing conceptual integration and paradigm-level connections). We use o3 with the finalized three-way codebook to assign each label to the most appropriate category. Our results, shown in Fig. [2](https://arxiv.org/html/2506.12189v2#S6.F2 "Figure 2 ‣ 6.3 Scientific Discovery ‣ 6 Results ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis"), indicate that o3 and Gemini 2.5 Pro are predominantly causality-centric and enablement-centric, respectively, while Claude 3.7 Sonnet is synthesis-centric with clear enablement tendencies.
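The keyword-counting pass can be sketched as below; the keyword sets are illustrative seeds drawn from the groups named above, not the authors' exact codebook (which was finalized through open coding):

```python
# Seed keyword groups for the three guiding principles; illustrative only.
CATEGORIES = {
    "causality-centric": {"causality", "chain", "critical"},
    "enablement-centric": {"enablement", "foundation", "breakthrough"},
    "synthesis-centric": {"conceptual", "integration", "paradigm"},
}

def categorize_label(label: str) -> str:
    """Assign a model's reflective label to the category whose
    keywords it mentions most often (ties broken by dict order)."""
    words = [w.strip(".,;:") for w in label.lower().split()]
    counts = {cat: sum(w in kws for w in words)
              for cat, kws in CATEGORIES.items()}
    return max(counts, key=counts.get)
```

In the paper this first pass only groups labels; the final assignment uses o3 with the three-way codebook.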
As the example in Table [4](https://arxiv.org/html/2506.12189v2#S6.T4 "Table 4 ‣ 6 Results ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis") shows, o3’s causality focus is reflected in its top picks: Hopfield’s 1982 energy-based network paper and the 1986 Nature backpropagation breakthrough, leading to the “2024 Nobel Prize announcement,” which shows how it links discovery directly to outcome. Gemini 2.5 Pro’s enablement tendency shows in its emphasis on methodological enablers: backpropagation (1986) first, then Hopfield’s associative-memory application and the Boltzmann machine, and it includes ‘Explicit Use of Physics Principles,’ highlighting the tools that enable further progress. Claude Sonnet 3.7 displays its balanced personality attributes.

With the growing interest in automating scientific discovery (O’Neill et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib31)) and human-AI collaboration (Gottweis et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib11)), understanding these reasoning profiles becomes critical. The current trend of using research papers as datasets for hypothesis generation may be limited; instead, analyzing the full timeline (though comparatively difficult to collect) using our proposed scientific discovery dataset could help better model multi-dimensional causal chains, and hence yield more accurate hypotheses.

This analysis not only enables more informed model selection but also points toward designing better human-AI collaboration workflows. By making LLM patterns more interpretable, models can be matched to different tasks, from providing computational scaffolding for complex problems to complementing human expertise, creativity, and values.

Further details, including the sampled events, their rankings, and model-specific personality insights across a range of scientific discoveries, are provided in Section [A](https://arxiv.org/html/2506.12189v2#A1 "Appendix A Appendix ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis").
160
+
161
![Image 3: Refer to caption](https://arxiv.org/html/2506.12189v2/x3.png)

(a) Personality category distribution

![Image 4: Refer to caption](https://arxiv.org/html/2506.12189v2/x4.png)

(b) Model semantic space

Figure 3: Analysis of LLM personality profiles. (a) Distribution of personality categories for each model, with higher values indicating stronger presence. (b) Models positioned in a 2D semantic space based on their personality traits.
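As a sketch of how a layout like Figure 3(b) can be produced, per-model category scores can be projected onto two principal components. The score values below are invented for illustration (the paper derives them from label counts), and the PCA-via-SVD step assumes NumPy is available:

```python
import numpy as np

# Hypothetical per-model scores over (causality, enablement, synthesis);
# these numbers are illustrative, not the paper's measured distributions.
models = ["o3", "Gemini 2.5 Pro", "Claude 3.7 Sonnet"]
scores = np.array([
    [0.7, 0.2, 0.1],   # o3: predominantly causality-centric
    [0.2, 0.6, 0.2],   # Gemini 2.5 Pro: predominantly enablement-centric
    [0.1, 0.3, 0.6],   # Claude 3.7 Sonnet: synthesis-centric, enablement leanings
])

# PCA via SVD: center the data, then project onto the top two components.
centered = scores - scores.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T   # shape (3, 2): one (x, y) point per model
```

Plotting `coords` with one marker per model yields a 2D map where models with similar category profiles land near each other.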
7 Ablation
----------

Table 5: Comparison of models’ ranking of critical events in the movie Aladdin

Model tendencies: Phi-4 focuses on plot-centric events and villain actions, while Orca 2 emphasizes character relationships and transformative moments. Qwen 2.5 balances character interactions with narrative developments. All three models identify different “most critical” events, with only the final confrontation appearing consistently (though at different rankings).
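A minimal way to quantify the cross-model agreement summarized above is top-k overlap between the ranked lists. The event names below are shortened, illustrative stand-ins rather than the exact entries of Table 5:

```python
# Sketch: fraction of events shared between two models' top-k rankings.
def topk_overlap(rank_a: list[str], rank_b: list[str], k: int = 5) -> float:
    """Return |top-k(a) ∩ top-k(b)| / k, ignoring rank positions."""
    shared = set(rank_a[:k]) & set(rank_b[:k])
    return len(shared) / k

# Illustrative stand-in lists: only the final confrontation is shared.
phi4  = ["Jafar's Plot", "Jasmine Escapes", "Final Confrontation"]
orca2 = ["Aladdin Meets Jasmine", "Encounter with Jafar", "Final Confrontation"]
agreement = topk_overlap(phi4, orca2, k=3)  # 1 of 3 events shared
```

Set overlap ignores rank order; a rank-sensitive measure (e.g., Kendall's tau over the shared events) would additionally capture the "different rankings" part of the observation.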
### 7.1 Human-Value Alignment – Movie Script Analysis

To examine how different language models prioritize and interpret narrative elements, we use the Movie Scripts Dataset from Hugging Face, comprising 1,172 movie scripts spanning diverse genres, time periods, and production styles. This dataset provides an ideal testing ground for understanding model-specific values, since humans with different backgrounds and preferences naturally emphasize distinct aspects of the same narrative. By analyzing which events each model identifies as critical within these scripts, we gain insight into their underlying value systems and interpretive biases.

The movie script analysis validates these personality profiles: Phi-4’s strategic focus is evident in its prioritizing business decisions like ‘Mark Zuckerberg’s Decision to Create Facebook’ and transformational events such as ‘Andrew’s Practice Regimen’; Orca 2’s emotional orientation highlights relationship conflicts like ‘Mark’s falling out with Eduardo Saverin’ and ‘Andrew’s confrontation with Fletcher’; while Qwen 2.5’s balanced yet achievement-oriented approach emphasizes milestone events like ‘Creation of Facemash’ and ‘The Final Performance at the Lincoln Center’ (Tables [34](https://arxiv.org/html/2506.12189v2#A1.T34 "Table 34 ‣ A.2 Scientific Discovery Categories ‣ Appendix A Appendix ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis")–[35](https://arxiv.org/html/2506.12189v2#A1.T35 "Table 35 ‣ A.2 Scientific Discovery Categories ‣ Appendix A Appendix ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis")).

This value-driven analysis is exemplified by the rankings for the movie Aladdin (Table [5](https://arxiv.org/html/2506.12189v2#S7.T5 "Table 5 ‣ 7 Ablation ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis")), where Phi-4 prioritizes plot-centric events like ‘Jafar’s Plot to Become Sultan’ and ‘Jasmine Escaping the Palace,’ Orca 2 emphasizes relationship moments such as ‘Aladdin meets Jasmine’ and ‘Aladdin’s encounter with Jafar,’ while Qwen 2.5 balances character interactions (‘Aladdin’s First Encounter with Jasmine’) with narrative developments (‘The Magic Carpet Reveals’).

Tables [32](https://arxiv.org/html/2506.12189v2#A1.T32 "Table 32 ‣ A.2 Scientific Discovery Categories ‣ Appendix A Appendix ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis")–[38](https://arxiv.org/html/2506.12189v2#A1.T38 "Table 38 ‣ A.2 Scientific Discovery Categories ‣ Appendix A Appendix ‣ Supernova Event Dataset: Interpreting Large Language Models’ Personality through Critical Event Analysis") provide further details about the critical events identified by each model across several movies. The analysis above, and the value signature presented by each model, continue to hold for the other movies as well.

These results show that, even given the same script, each model views the story through a different “value lens.” Phi-4 focuses on big strategic or paradigm-shifting moments, Orca 2 highlights emotional and relational beats, and Qwen 2.5 picks out outcome-driven milestones and clear signs of character agency. Their choices mirror the kinds of narrative biases found in human readers with different interpretive goals.
8 Discussion and Conclusion
---------------------------

Recent work on subjective evaluation and long-context reasoning has underscored the need for benchmarks that go beyond factual accuracy and needle-in-a-haystack retrieval to probe deeper cognitive functions in LLMs, such as narrative salience detection, value alignment (Meadows et al., [2024](https://arxiv.org/html/2506.12189v2#bib.bib28)), and personality coherence (Jiang et al., [2023](https://arxiv.org/html/2506.12189v2#bib.bib16)). By framing critical-event ranking as a cognitively motivated salience task, the Supernova Event Dataset complements emerging long-context suites such as NoLiMa (Modarressi et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib29)) and BABILong (Kuratov et al., [2024](https://arxiv.org/html/2506.12189v2#bib.bib22)) while introducing a novel personality-oriented perspective. Our evaluation strategy uses retrieval, structured prompts, and an external LLM as a judge across diverse and extensive scenarios, allowing us to examine model decision making and identify personality patterns in depth.
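The LLM-as-judge step can be sketched as a prompt-assembly function. The wording and fields below are hypothetical; the actual prompts are given in Appendix A.1:

```python
# Hypothetical sketch of assembling a judge prompt from a model's ranked
# events. The instruction text is illustrative, not the paper's prompt.
def build_judge_prompt(model_name: str, ranked_events: list[str]) -> str:
    lines = [
        f"Below are the events that {model_name} selected as most critical,",
        "in ranked order. Infer the guiding principle ('personality')",
        "behind this selection, and justify your answer briefly.",
        "",
    ]
    # Number the events 1..n so the judge sees the ranking explicitly.
    lines += [f"{i}. {event}" for i, event in enumerate(ranked_events, 1)]
    return "\n".join(lines)
```

The resulting string is what the external judge model receives; the evaluated model never sees it, which keeps selection and judgment decoupled.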
The dataset and the personality-based evaluation offer a novel way to assess deeper reasoning abilities in large language models. By encouraging models to weigh local details against the global context, this task supports better understanding, more thoughtful decision-making, and clearer information organization. It pushes LLMs toward more human-like reasoning rather than surface-level analysis, and it also reveals how well they align with human values. Our findings align with recent work showing that LLMs exhibit personality-like patterns (Jiang et al., [2023](https://arxiv.org/html/2506.12189v2#bib.bib16); Bodroža et al., [2024](https://arxiv.org/html/2506.12189v2#bib.bib4)), but extend it by demonstrating that these patterns emerge without explicit personality prompting. This suggests LLMs may have inherent behavioral tendencies shaped by their training, supporting the social-determinism view of LLM personality (Yang et al., [2024](https://arxiv.org/html/2506.12189v2#bib.bib40)).

In addition to predicting personality traits, future work can use the Supernova Event Dataset to evaluate LLMs’ ability to model causal chains, understand dynamic event relationships, and perform multi-step reasoning, especially in distinguishing correlation from causation. This will help improve the transparency of their decision-making.

However, several limitations remain. First, while our articles are diverse, they reflect Wikipedia’s editorial biases (Greenstein & Zhu, [2012](https://arxiv.org/html/2506.12189v2#bib.bib12)) and the Western-centric coverage typical of many open corpora (Talat et al., [2022](https://arxiv.org/html/2506.12189v2#bib.bib38)), which in turn may skew the personality labels inferred by the judge model (Krumdick et al., [2025](https://arxiv.org/html/2506.12189v2#bib.bib20)). Second, LLM-as-judge methods are known to exhibit stylistic biases (Cao, [2024](https://arxiv.org/html/2506.12189v2#bib.bib6)) that can affect trait inference. One way to mitigate these issues is to incorporate human annotations and cross-model committees of LLM judges.

While our approach has limitations, particularly the use of LLM-based evaluation without human validation, we believe it opens important avenues for understanding how LLMs approach subjective tasks. The consistent patterns we observe across domains suggest that LLMs may indeed exhibit stable behavioral tendencies in their decision-making. We invite the community to build on this work, particularly by developing human validation studies and more rigorous evaluation frameworks. The Supernova Event Dataset and our analysis code are publicly available to facilitate such efforts.

Future versions of the Supernova Event Dataset could explore gradient representations of personality traits, incorporate different personality frameworks, and examine how traits change across different narrative contexts and decision scenarios. By integrating mechanistic interpretability techniques to analyze how models internally represent and reason about personality-relevant features during critical event selection, we hope these directions catalyze more transparent, value-aware, and causally grounded large-model research that advances our understanding of AI alignment.
9 Ethical Considerations
------------------------

This work examines the behavioral patterns of large language models (LLMs), which may have implications for their deployment in decision-making contexts. We emphasize that our findings are exploratory and should not be used to make definitive claims about model capabilities. Further validation is needed before using such evaluations for high-stakes decisions.
References
----------

* Aoyagui et al. (2025) Aoyagui, P.A., Stemmler, K., Ferguson, S., Kim, Y.-H., and Kuzminykh, A. A matter of perspective(s): Contrasting human and llm argumentation in subjective decision-making on subtle sexism. _arXiv preprint arXiv:2502.14052_, 2025.
* Bélisle-Pipon (2024) Bélisle-Pipon, J.-C. Why we need to be careful with llms in medicine. _Frontiers in Medicine_, 11:1495582, 2024.
* Besta et al. (2025) Besta, M., Barth, J., Schreiber, E., Kubicek, A., Catarino, A., Gerstenberger, R., Nyczyk, P., Iff, P., Li, Y., Houliston, S., et al. Reasoning language models: A blueprint. _arXiv preprint arXiv:2501.11223_, 2025.
* Bodroža et al. (2024) Bodroža, B., Dinić, B.M., and Bojić, L. Personality testing of large language models: limited temporal stability, but highlighted prosociality. _Royal Society Open Science_, 11(10):240180, 2024.
* Cai et al. (2025) Cai, R., Yu, S., Zhang, J., Chen, W., Xu, B., and Zhang, K. Dr. eci: Infusing large language models with causal knowledge for decomposed reasoning in event causality identification. In _Proceedings of the 31st International Conference on Computational Linguistics_, pp. 9346–9375, 2025.
* Cao (2024) Cao, H. Writing Style Matters: An Examination of Bias and Fairness in Information Retrieval Systems. _arXiv e-prints_, art. arXiv:2411.13173, November 2024. doi: 10.48550/arXiv.2411.13173.
* Deng et al. (2024) Deng, C., Duan, Y., Jin, X., Chang, H., Tian, Y., Liu, H., Zou, H.P., Jin, Y., Xiao, Y., Wang, Y., et al. Deconstructing the ethics of large language models from long-standing issues to new-emerging dilemmas. _arXiv e-prints_, pp. arXiv–2406, 2024.
* Ding & Wang (2024) Ding, X. and Wang, L. Do language models understand time? _arXiv preprint arXiv:2412.13845_, 2024.
* Gao et al. (2024) Gao, J., Zhao, H., Wang, W., Yu, C., and Xu, R. Eventrl: Enhancing event extraction with outcome supervision for large language models. _arXiv preprint arXiv:2402.11430_, 2024.
* Gianicolo et al. (2020) Gianicolo, E.A., Eichler, M., Muensterer, O., Strauch, K., and Blettner, M. Methods for evaluating causality in observational studies. _Deutsches Arzteblatt International_, 116(7):101–107, 2020.
* Gottweis et al. (2025) Gottweis, J., Weng, W.-H., Daryin, A., Tu, T., Palepu, A., Sirkovic, P., Myaskovsky, A., Weissenberger, F., Rong, K., Tanno, R., et al. Towards an ai co-scientist. _arXiv preprint arXiv:2502.18864_, 2025.
* Greenstein & Zhu (2012) Greenstein, S. and Zhu, F. Is wikipedia biased? _American Economic Review_, 102(3):343–348, 2012.
* He & Liu (2025) He, J. and Liu, J. Investigating the impact of llm personality on cognitive bias manifestation in automated decision-making tasks. _arXiv preprint arXiv:2502.14219_, 2025.
* Heston & Gillette (2025) Heston, T.F. and Gillette, J. Do large language models have a personality? a psychometric evaluation with implications for clinical medicine and mental health ai. _medRxiv_, pp. 2025–03, 2025.
* Ivanov & Penchev (2024) Ivanov, T. and Penchev, V. Ai benchmarks and datasets for llm evaluation. _arXiv preprint arXiv:2412.01020_, 2024.
* Jiang et al. (2023) Jiang, H., Zhang, X., Cao, X., Breazeal, C., Roy, D., and Kabbara, J. Personallm: Investigating the ability of large language models to express personality traits. _arXiv preprint arXiv:2305.02547_, 2023.
* Johnson et al. (2019) Johnson, J., Douze, M., and Jégou, H. Billion-scale similarity search with GPUs. _IEEE Transactions on Big Data_, 7(3):535–547, 2019.
* Khamassi et al. (2024) Khamassi, M., Nahon, M., and Chatila, R. Strong and weak alignment of large language models with human values. _Scientific Reports_, 14(1):19399, 2024.
* Kourani et al. (2022) Kourani, H., Di Francescomarino, C., Ghidini, C., van der Aalst, W., and van Zelst, S. Mining for long-term dependencies in causal graphs. In _International Conference on Business Process Management_, pp. 117–131. Springer, 2022.
* Krumdick et al. (2025) Krumdick, M., Lovering, C., Reddy, V., Ebner, S., and Tanner, C. No free labels: Limitations of llm-as-a-judge without human grounding. _arXiv preprint arXiv:2503.05061_, 2025.
* Kumar et al. (2025) Kumar, K., Ashraf, T., Thawakar, O., Anwer, R.M., Cholakkal, H., Shah, M., Yang, M.-H., Torr, P.H., Khan, F.S., and Khan, S. Llm post-training: A deep dive into reasoning large language models. _arXiv preprint arXiv:2502.21321_, 2025.
* Kuratov et al. (2024) Kuratov, Y., Bulatov, A., Anokhin, P., Rodkin, I., Sorokin, D., Sorokin, A., and Burtsev, M. Babilong: Testing the limits of llms with long context reasoning-in-a-haystack. _Advances in Neural Information Processing Systems_, 37:106519–106554, 2024.
* Li et al. (2025) Li, X., Cai, Z., Wang, S., Yu, K., and Chen, F. A survey on enhancing causal reasoning ability of large language models. _arXiv preprint arXiv:2503.09326_, 2025.
* Lindsey et al. (2025) Lindsey, J., Gurnee, W., Ameisen, E., Chen, B., Pearce, A., Turner, N.L., Citro, C., Abrahams, D., Carter, S., Hosmer, B., Marcus, J., Sklar, M., Templeton, A., Bricken, T., McDougall, C., Cunningham, H., Henighan, T., Jermyn, A., Jones, A., Persic, A., Qi, Z., Thompson, T.B., Zimmerman, S., Rivoire, K., Conerly, T., Olah, C., and Batson, J. On the biology of a large language model. _Transformer Circuits Thread_, 2025. URL [https://transformer-circuits.pub/2025/attribution-graphs/biology.html](https://transformer-circuits.pub/2025/attribution-graphs/biology.html).
* Ling et al. (2025) Ling, Z., Liu, K., Yan, K., Yang, Y., Lin, W., Fan, T.-H., Shen, L., Du, Z., and Chen, J. Longreason: A synthetic long-context reasoning benchmark via context expansion. _arXiv preprint arXiv:2501.15089_, 2025.
* Liu & Luo (2024) Liu, Z. and Luo, Y. Document-level event extraction with definition-driven icl. _arXiv preprint arXiv:2408.05566_, 2024.
* Liu et al. (2018) Liu, Z., Xiong, C., Mitamura, T., and Hovy, E. Automatic event salience identification. _arXiv preprint arXiv:1809.00647_, 2018.
* Meadows et al. (2024) Meadows, G.I., Lau, N.W.L., Susanto, E.A., Yu, C.L., and Paul, A. Localvaluebench: A collaboratively built and extensible benchmark for evaluating localized value alignment and ethical safety in large language models. _arXiv preprint arXiv:2408.01460_, 2024.
* Modarressi et al. (2025) Modarressi, A., Deilamsalehy, H., Dernoncourt, F., Bui, T., Rossi, R.A., Yoon, S., and Schütze, H. Nolima: Long-context evaluation beyond literal matching. _arXiv preprint arXiv:2502.05167_, 2025.
* Nussbaum et al. (2024) Nussbaum, Z., Morris, J.X., Duderstadt, B., and Mulyar, A. Nomic embed: Training a reproducible long context text embedder. _arXiv preprint arXiv:2402.01613_, 2024.
* O’Neill et al. (2025) O’Neill, C., Ghosal, T., Răileanu, R., Walmsley, M., Bui, T., Schawinski, K., and Ciucă, I. Sparks of science: Hypothesis generation using structured paper data. _arXiv preprint arXiv:2504.12976_, 2025.
* Otake et al. (2020) Otake, T., Yokoi, S., Inoue, N., Takahashi, R., Kuribayashi, T., and Inui, K. Modeling event salience in narratives via barthes’ cardinal functions. _arXiv preprint arXiv:2011.01785_, 2020.
* Plum et al. (2019) Plum, A., Zampieri, M., Orasan, C., Wandl-Vogt, E., and Mitkov, R. Large-scale data harvesting for biographical data. 2019.
* Shao et al. (2024) Shao, Y., Jiang, Y., Kanell, T.A., Xu, P., Khattab, O., and Lam, M.S. Assisting in writing wikipedia-like articles from scratch with large language models. _arXiv preprint arXiv:2402.14207_, 2024.
* Sorokovikova et al. (2024) Sorokovikova, A., Fedorova, N., Rezagholi, S., and Yamshchikov, I.P. Llms simulate big five personality traits: Further evidence. _arXiv preprint arXiv:2402.01765_, 2024.
* Stranisci et al. (2022) Stranisci, M.A., Mensa, E., Diakite, O., Radicioni, D., and Damiano, R. Guidelines and a corpus for extracting biographical events. _arXiv preprint arXiv:2206.03547_, 2022.
* Su et al. (2025) Su, Y., Zhang, H., Zhang, G., Wang, Y., Fan, Y., Li, R., and Wang, Y. Enhancing event causality identification with llm knowledge and concept-level event relations. In _Proceedings of the 31st International Conference on Computational Linguistics_, pp. 7403–7414, 2025.
* Talat et al. (2022) Talat, Z., Névéol, A., Biderman, S., Clinciu, M., Dey, M., Longpre, S., Luccioni, S., Masoud, M., Mitchell, M., Radev, D., Sharma, S., Subramonian, A., Tae, J., Tan, S., Tunuguntla, D., and Van Der Wal, O. You reap what you sow: On the challenges of bias evaluation under multilingual settings. In Fan, A., Ilic, S., Wolf, T., and Gallé, M. (eds.), _Proceedings of BigScience Episode #5 – Workshop on Challenges & Perspectives in Creating Large Language Models_, pp. 26–41, virtual+Dublin, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.bigscience-1.3. URL [https://aclanthology.org/2022.bigscience-1.3/](https://aclanthology.org/2022.bigscience-1.3/).
* Wang et al. (2025) Wang, Y., Zhao, J., Ones, D.S., He, L., and Xu, X. Evaluating the ability of large language models to emulate personality. _Scientific Reports_, 15(1):519, 2025.
* Yang et al. (2024) Yang, S., Zhu, S., Bao, R., Liu, L., Cheng, Y., Hu, L., Li, M., and Wang, D. What makes your model a low-empathy or warmth person: Exploring the origins of personality in llms. _arXiv preprint arXiv:2410.10863_, 2024.
* Zhang et al. (2021) Zhang, X., Chen, M., and May, J. Salience-aware event chain modeling for narrative understanding. _arXiv preprint arXiv:2109.10475_, 2021.
* Zhang et al. (2024) Zhang, X.F., Blum, C., Choji, T., Shah, S., and Vempala, A. Ultra: Unleash llms’ potential for event argument extraction through hierarchical modeling and pair-wise refinement. _arXiv preprint arXiv:2401.13218_, 2024.
Appendix A Appendix
-------------------

### A.1 Prompts

### A.2 Scientific Discovery Categories

Table 6: Comprehensive mapping of all o3‑generated labels to the three event‑type categories used in our analysis.
Table 7: Scientific Discovery Critical Events Analysis by Model

Table 8: Scientific Discovery Critical Events Analysis by Model (continued)

Table 9: Scientific Discovery Critical Events Analysis by Model (continued)

Table 10: Scientific Discovery Critical Events Analysis by Model (continued)

Table 11: Scientific Discovery Critical Events Analysis by Model (continued)

Table 12: Scientific Discovery Critical Events Analysis by Model (continued)

Table 13: Scientific Discovery Critical Events Analysis by Model (continued)

Table 14: Scientific Discovery Critical Events Analysis by Model (continued)

Table 15: Scientific Discovery Critical Events Analysis by Model (continued)

Table 16: Scientific Discovery Critical Events Analysis by Model (continued)

Table 17: Scientific Discovery Critical Events Analysis by Model (continued)

Table 18: Scientific Discovery Critical Events Analysis by Model (continued)

Table 19: Scientific Discovery Critical Events Analysis by Model (continued)

Table 20: Scientific Discovery Critical Events Analysis by Model (continued)

Table 21: Scientific Discovery Critical Events Analysis by Model (continued)

Table 22: Scientific Discovery Critical Events Analysis by Model (continued)

Table 23: Scientific Discovery Critical Events Analysis by Model (continued)

Table 24: Scientific Discovery Critical Events Analysis by Model (continued)

Table 25: Scientific Discovery Critical Events Analysis by Model (continued)

Table 26: Scientific Discovery Critical Events Analysis by Model

Table 27: Scientific Discovery Critical Events Analysis by Model

Table 28: Scientific Discovery Critical Events Analysis by Model

Table 29: Scientific Discovery Critical Events Analysis by Model
Cellular Oxygen Sensing

o3
Model Label: Mechanistic-Keystone
1. Prolyl-hydroxylation switch revealed (2001)
2. VHL targets HIF-α for destruction (1999)
3. HIF-1 purified and cloned as an O₂-labile heterodimer (1995)
4. Hypoxia Response Element mapped in the EPO gene (1991)
5. O₂-sensing machinery shown to be ubiquitous (1993)

Gemini 2.5 Pro
Model Label: Mechanism Matters Most
1. Prolyl Hydroxylation Identified as the Oxygen-Dependent Switch (2001)
2. VHL Linked to Oxygen-Dependent HIF-1α Degradation (1999)
3. Discovery of Widespread Oxygen Sensing Machinery (1993)
4. Biochemical Purification and Cloning of HIF-1α (1995)
5. Identification of the Hypoxia Response Element (HRE) (1991-1992)

Claude Sonnet 3.7 Thinking
Model Label: Mechanistic Convergence
1. Discovery of prolyl hydroxylation as the oxygen-sensing switch (2001)
2. Discovery that VHL targets HIF-α for oxygen-dependent degradation (1999)
3. Identification and purification of HIF-1 as a heterodimer (1992-1995)
4. Linking VHL disease to hypoxia-regulated genes (1996)
5. Evidence for a widespread oxygen-sensing mechanism (1993)
Table 30: Scientific Discovery Critical Events Analysis by Model

Table 31: Scientific Discovery Critical Events Analysis by Model

Table 32: Movie Script Critical Events Analysis by Model

Table 33: Movie Script Critical Events Analysis by Model (continued)

Table 34: Movie Script Critical Events Analysis by Model (continued)

Table 35: Movie Script Critical Events Analysis by Model (continued)

Table 36: Movie Script Critical Events Analysis by Model (continued)

Table 37: Movie Script Critical Events Analysis by Model (continued)

Table 38: Movie Script Critical Events Analysis by Model (continued)