---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- finance
- table-text
- numerical_reasoning
size_categories:
- n<1K
---


# SECQUE

- [**Paper**](https://arxiv.org/abs/2504.04596)

SECQUE is a comprehensive benchmark for evaluating large language models (LLMs) in financial analysis tasks.

SECQUE comprises 565 expert-written questions covering SEC filings analysis across four key categories: 
- comparison analysis
- ratio calculation
- risk assessment
- financial insight generation
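
As a toy illustration of the kind of numerical reasoning the ratio-calculation category targets (the figures below are invented for illustration, not drawn from any filing in the benchmark):

```python
# Illustrative only: a simple liquidity ratio of the sort a model must
# compute from table-text evidence in SEC filings. Input figures are made up.

def current_ratio(current_assets: float, current_liabilities: float) -> float:
    """Current ratio = current assets / current liabilities."""
    return current_assets / current_liabilities

print(round(current_ratio(150_000.0, 100_000.0), 2))  # 1.5
```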

To assess model performance, we develop SECQUE-Judge, an evaluation mechanism leveraging multiple LLM-based judges, which demonstrates strong alignment with human evaluations. 
Additionally, we provide an extensive analysis of various models’ performance on our benchmark. 
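
The exact judge prompts and scoring rubric are defined in the paper; purely as a minimal sketch, assuming binary accept/reject verdicts, one way to combine multiple LLM-based judges is a majority vote:

```python
# Hedged sketch: combining hypothetical binary verdicts (1 = answer accepted,
# 0 = rejected) from several LLM judges. SECQUE-Judge's actual aggregation
# and scoring are specified in the paper, not here.

from statistics import mean

def aggregate_judges(verdicts: list[int]) -> float:
    """Return the fraction of judges that accepted the answer."""
    return mean(verdicts)

def accept(verdicts: list[int], threshold: float = 0.5) -> bool:
    """Accept the answer when a strict majority of judges accepted it."""
    return aggregate_judges(verdicts) > threshold
```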


## Results

| Model                        | Baseline          | Financial        | Baseline CoT       | Financial CoT      | Flipped           | Avg Tokens by Model |
|------------------------------|-------------------|------------------|--------------------|--------------------|-------------------|---------------------|
| GPT-4o                       | **_0.69_**/0.79   | 0.62/0.71        | 0.67/0.76          | 0.63/0.73          | 0.68/0.78         | 319.84              |
| GPT-4o-mini                  | _0.64_/0.73       | 0.38/0.47        | 0.60/0.72          | 0.56/0.65          | 0.62/0.73         | 289.76              |
| Llama-3.3-70B-Instruct       | _0.65_/0.75       | 0.60/0.71        | 0.63/0.74          | 0.60/0.72          | 0.62/0.74         | 341.63              |
| Qwen2.5-32B-Instruct         | 0.61/0.72         | 0.49/0.58        | 0.60/0.71          | 0.55/0.67          | _0.65_/0.75       | 331.34              |
| Phi-4                        | 0.56/0.66         | 0.55/0.64        | _0.57_/0.67        | 0.56/0.66          | _0.57_/0.67       | 294.33              |
| Meta-Llama-3.1-8B-Instruct   | _0.48_/0.60       | 0.41/0.54        | 0.44/0.56          | 0.40/0.53          | 0.47/0.59         | 338.38              |
| Mistral-Nemo-Instruct-2407   | _0.46_/0.55       | 0.32/0.42        | 0.45/0.56          | 0.44/0.55          | 0.44/0.54         | 231.52              |
| Avg Tokens by Prompt         | 283.04            | 151.97           | 437.38             | 334.71             | 317.57            | 304.93              |

## Citation

```bibtex
@inproceedings{ben-yoash-2025-secque,
    title = "SECQUE: A Benchmark for Evaluating Real-World Financial Analysis Capabilities",
    author = "Ben Yoash, Noga and
      Brief, Meni and
      Ovadia, Oded and
      Shenderovitz, Gil and
      Mishaeli, Moshik and
      Lemberg, Rachel and
      Sheetrit, Eitam",
    month = apr,
    year = "2025",
    url = "https://arxiv.org/pdf/2504.04596",
}
```

## Evaluation Benchmark Notice

This benchmark is intended solely for evaluation and must not be used for training in any way.