nogabenyoash committed · Commit 05baddf · verified · 1 Parent(s): 93389b6

Update README.md

Files changed (1): README.md (+14 -0)
@@ -28,6 +28,20 @@ SECQUE comprises 565 expert-written questions covering SEC filings analysis acro
  To assess model performance, we develop SECQUE-Judge, an evaluation mechanism leveraging multiple LLM-based judges, which demonstrates strong alignment with human evaluations.
  Additionally, we provide an extensive analysis of various models’ performance on our benchmark.

+
+ ## Results
+
+ | Model                      | Baseline        | Financial | Baseline CoT | Financial CoT | Flipped     | Avg Tokens by Model |
+ |----------------------------|-----------------|-----------|--------------|---------------|-------------|---------------------|
+ | GPT-4o                     | **_0.69_**/0.79 | 0.62/0.71 | 0.67/0.76    | 0.63/0.73     | 0.68/0.78   | 319.84              |
+ | GPT-4o-mini                | _0.64_/0.73     | 0.38/0.47 | 0.60/0.72    | 0.56/0.65     | 0.62/0.73   | 289.76              |
+ | Llama-3.3-70B-Instruct     | _0.65_/0.75     | 0.60/0.71 | 0.63/0.74    | 0.60/0.72     | 0.62/0.74   | 341.63              |
+ | Qwen2.5-32B-Instruct       | 0.61/0.72       | 0.49/0.58 | 0.60/0.71    | 0.55/0.67     | _0.65_/0.75 | 331.34              |
+ | Phi-4                      | 0.56/0.66       | 0.55/0.64 | _0.57_/0.67  | 0.56/0.66     | _0.57_/0.67 | 294.33              |
+ | Meta-Llama-3.1-8B-Instruct | _0.48_/0.60     | 0.41/0.54 | 0.44/0.56    | 0.40/0.53     | 0.47/0.59   | 338.38              |
+ | Mistral-Nemo-Instruct-2407 | _0.46_/0.55     | 0.32/0.42 | 0.45/0.56    | 0.44/0.55     | 0.44/0.54   | 231.52              |
+ | Avg Tokens by Prompt       | 283.04          | 151.97    | 437.38       | 334.71        | 317.57      | 304.93              |
+
  ## Citation

  ```bash