<link rel="stylesheet" href="static/css/tooltips.css">
<style>
.tooltip-right:hover::after {
left: auto !important;
right: 100% !important;
margin-left: 0 !important;
margin-right: 10px !important;
}
</style>
<!-- Sentiment Analysis -->
<div id="sentiment-analysis" class="tab-content">
<h2 class="title is-4">Sentiment Analysis Task Results</h2>
<div class="results-table">
<table class="table is-bordered is-striped is-narrow is-hoverable is-fullwidth">
<thead>
<tr>
<th rowspan="2">Model</th>
<th colspan="3" class="has-text-centered tooltip-trigger" data-title="FiQA Task 1" data-tooltip="FiQA Task 1 focuses on aspect-based financial sentiment analysis in microblog posts and news headlines using a continuous scale from -1 (negative) to 1 (positive). The regression task requires models to accurately predict the sentiment score that reflects investor perception of financial texts.">FiQA Task 1</th>
<th colspan="4" class="has-text-centered tooltip-trigger" data-title="Financial Phrase Bank" data-tooltip="Financial Phrase Bank (FPB) contains 4,840 sentences from financial news articles categorized as positive, negative, or neutral by 16 finance experts using majority voting. The sentiment classification task requires understanding how these statements might influence investor perception of stock prices.">Financial Phrase Bank (FPB)</th>
<th colspan="4" class="has-text-centered tooltip-trigger tooltip-right" style="position: relative;" data-title="SubjECTive-QA" data-tooltip="SubjECTive-QA contains 49,446 annotations across 2,747 question-answer pairs extracted from 120 earnings call transcripts. The multi-label classification task involves analyzing six subjective features in financial discourse: assertiveness, cautiousness, optimism, specificity, clarity, and relevance.">SubjECTive-QA</th>
</tr>
<tr>
<th class="has-text-centered">MSE</th>
<th class="has-text-centered">MAE</th>
<th class="has-text-centered">R² Score</th>
<th class="has-text-centered">Accuracy</th>
<th class="has-text-centered">Precision</th>
<th class="has-text-centered">Recall</th>
<th class="has-text-centered">F1</th>
<th class="has-text-centered">Precision</th>
<th class="has-text-centered">Recall</th>
<th class="has-text-centered">F1</th>
<th class="has-text-centered">Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tooltip-trigger" data-title="Llama 3 70B Instruct" data-tooltip="Meta's advanced 70 billion parameter dense language model optimized for instruction-following tasks. Available through Together AI and notable for complex reasoning capabilities.">Llama 3 70B Instruct</td>
<td class="has-text-centered">0.123</td>
<td class="has-text-centered">0.290</td>
<td class="has-text-centered">0.272</td>
<td class="has-text-centered">0.901</td>
<td class="has-text-centered">0.904</td>
<td class="has-text-centered">0.901</td>
<td class="has-text-centered">0.902</td>
<td class="has-text-centered">0.652</td>
<td class="has-text-centered">0.573</td>
<td class="has-text-centered">0.535</td>
<td class="has-text-centered">0.573</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Llama 3 8B Instruct" data-tooltip="Meta's efficient 8 billion parameter language model optimized for instruction-following. Balances performance and efficiency for financial tasks with reasonable reasoning capabilities.">Llama 3 8B Instruct</td>
<td class="has-text-centered">0.161</td>
<td class="has-text-centered">0.344</td>
<td class="has-text-centered">0.045</td>
<td class="has-text-centered">0.738</td>
<td class="has-text-centered">0.801</td>
<td class="has-text-centered">0.738</td>
<td class="has-text-centered">0.698</td>
<td class="has-text-centered">0.635</td>
<td class="has-text-centered">0.625</td>
<td class="has-text-centered performance-best">0.600</td>
<td class="has-text-centered">0.625</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DBRX Instruct" data-tooltip="Databricks' 132 billion parameter Mixture of Experts (MoE) model focused on advanced reasoning. Demonstrates competitive performance on financial tasks with strong text processing capabilities.">DBRX Instruct</td>
<td class="has-text-centered">0.160</td>
<td class="has-text-centered">0.321</td>
<td class="has-text-centered">0.052</td>
<td class="has-text-centered">0.524</td>
<td class="has-text-centered">0.727</td>
<td class="has-text-centered">0.524</td>
<td class="has-text-centered">0.499</td>
<td class="has-text-centered">0.654</td>
<td class="has-text-centered">0.541</td>
<td class="has-text-centered">0.436</td>
<td class="has-text-centered">0.541</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DeepSeek LLM (67B)" data-tooltip="DeepSeek's 67 billion parameter model optimized for chat applications. Balances performance and efficiency across financial tasks with solid reasoning capabilities.">DeepSeek LLM (67B)</td>
<td class="has-text-centered">0.118</td>
<td class="has-text-centered">0.278</td>
<td class="has-text-centered">0.302</td>
<td class="has-text-centered">0.815</td>
<td class="has-text-centered">0.867</td>
<td class="has-text-centered">0.815</td>
<td class="has-text-centered">0.811</td>
<td class="has-text-centered">0.676</td>
<td class="has-text-centered">0.544</td>
<td class="has-text-centered">0.462</td>
<td class="has-text-centered">0.544</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Gemma 2 27B" data-tooltip="Google's open-weight 27 billion parameter model optimized for reasoning tasks. Balances performance and efficiency across financial domains with strong instruction-following.">Gemma 2 27B</td>
<td class="has-text-centered performance-best">0.100</td>
<td class="has-text-centered performance-best">0.266</td>
<td class="has-text-centered">0.406</td>
<td class="has-text-centered">0.890</td>
<td class="has-text-centered">0.896</td>
<td class="has-text-centered">0.890</td>
<td class="has-text-centered">0.884</td>
<td class="has-text-centered">0.562</td>
<td class="has-text-centered">0.524</td>
<td class="has-text-centered">0.515</td>
<td class="has-text-centered">0.524</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Gemma 2 9B" data-tooltip="Google's efficient open-weight 9 billion parameter model. Demonstrates good performance on financial tasks relative to its smaller size.">Gemma 2 9B</td>
<td class="has-text-centered">0.189</td>
<td class="has-text-centered">0.352</td>
<td class="has-text-centered">-0.120</td>
<td class="has-text-centered performance-strong">0.940</td>
<td class="has-text-centered performance-strong">0.941</td>
<td class="has-text-centered performance-strong">0.940</td>
<td class="has-text-centered performance-strong">0.940</td>
<td class="has-text-centered">0.570</td>
<td class="has-text-centered">0.499</td>
<td class="has-text-centered">0.491</td>
<td class="has-text-centered">0.499</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Mistral (7B) Instruct v0.3" data-tooltip="Mistral AI's 7 billion parameter instruction-tuned model. Demonstrates impressive efficiency with reasonable performance on financial tasks despite its smaller size.">Mistral (7B) Instruct v0.3</td>
<td class="has-text-centered">0.135</td>
<td class="has-text-centered">0.278</td>
<td class="has-text-centered">0.200</td>
<td class="has-text-centered">0.847</td>
<td class="has-text-centered">0.854</td>
<td class="has-text-centered">0.847</td>
<td class="has-text-centered">0.841</td>
<td class="has-text-centered">0.607</td>
<td class="has-text-centered">0.542</td>
<td class="has-text-centered">0.522</td>
<td class="has-text-centered">0.542</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Mixtral-8x22B Instruct" data-tooltip="Mistral AI's 141 billion parameter MoE model with eight 22B expert networks. Features robust reasoning capabilities for financial tasks with strong instruction-following performance.">Mixtral-8x22B Instruct</td>
<td class="has-text-centered">0.221</td>
<td class="has-text-centered">0.364</td>
<td class="has-text-centered">-0.310</td>
<td class="has-text-centered">0.768</td>
<td class="has-text-centered">0.845</td>
<td class="has-text-centered">0.768</td>
<td class="has-text-centered">0.776</td>
<td class="has-text-centered">0.614</td>
<td class="has-text-centered">0.538</td>
<td class="has-text-centered">0.510</td>
<td class="has-text-centered">0.538</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Mixtral-8x7B Instruct" data-tooltip="Mistral AI's 47 billion parameter MoE model with eight 7B expert networks. Balances efficiency and performance with reasonable financial reasoning capabilities.">Mixtral-8x7B Instruct</td>
<td class="has-text-centered">0.208</td>
<td class="has-text-centered">0.307</td>
<td class="has-text-centered">-0.229</td>
<td class="has-text-centered">0.896</td>
<td class="has-text-centered">0.898</td>
<td class="has-text-centered">0.896</td>
<td class="has-text-centered">0.893</td>
<td class="has-text-centered">0.611</td>
<td class="has-text-centered">0.518</td>
<td class="has-text-centered">0.498</td>
<td class="has-text-centered">0.518</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Qwen 2 Instruct (72B)" data-tooltip="Alibaba's 72 billion parameter instruction-following model optimized for reasoning tasks. Features strong performance on financial domains with advanced text processing capabilities.">Qwen 2 Instruct (72B)</td>
<td class="has-text-centered">0.205</td>
<td class="has-text-centered">0.409</td>
<td class="has-text-centered">-0.212</td>
<td class="has-text-centered">0.904</td>
<td class="has-text-centered">0.908</td>
<td class="has-text-centered">0.904</td>
<td class="has-text-centered">0.901</td>
<td class="has-text-centered">0.644</td>
<td class="has-text-centered">0.601</td>
<td class="has-text-centered">0.576</td>
<td class="has-text-centered">0.601</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="WizardLM-2 8x22B" data-tooltip="A 141 billion parameter MoE model (based on Mixtral-8x22B) focused on complex reasoning. Designed for advanced instruction-following with strong capabilities across financial tasks.">WizardLM-2 8x22B</td>
<td class="has-text-centered">0.129</td>
<td class="has-text-centered">0.283</td>
<td class="has-text-centered">0.239</td>
<td class="has-text-centered">0.765</td>
<td class="has-text-centered">0.853</td>
<td class="has-text-centered">0.765</td>
<td class="has-text-centered">0.779</td>
<td class="has-text-centered">0.611</td>
<td class="has-text-centered">0.570</td>
<td class="has-text-centered">0.566</td>
<td class="has-text-centered">0.570</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DeepSeek-V3" data-tooltip="DeepSeek's 685 billion parameter Mixture of Experts (MoE) model optimized for advanced reasoning. Strong performance on financial tasks with robust instruction-following capabilities.">DeepSeek-V3</td>
<td class="has-text-centered">0.150</td>
<td class="has-text-centered">0.311</td>
<td class="has-text-centered">0.111</td>
<td class="has-text-centered">0.828</td>
<td class="has-text-centered">0.851</td>
<td class="has-text-centered">0.828</td>
<td class="has-text-centered">0.814</td>
<td class="has-text-centered">0.640</td>
<td class="has-text-centered">0.572</td>
<td class="has-text-centered performance-medium">0.583</td>
<td class="has-text-centered">0.572</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DeepSeek R1" data-tooltip="DeepSeek's premium 671 billion parameter Mixture of Experts (MoE) model representing their most advanced offering. Designed for state-of-the-art performance across complex reasoning and financial tasks.">DeepSeek R1</td>
<td class="has-text-centered performance-low">0.110</td>
<td class="has-text-centered">0.289</td>
<td class="has-text-centered">0.348</td>
<td class="has-text-centered">0.904</td>
<td class="has-text-centered">0.907</td>
<td class="has-text-centered">0.904</td>
<td class="has-text-centered">0.902</td>
<td class="has-text-centered">0.644</td>
<td class="has-text-centered">0.489</td>
<td class="has-text-centered">0.499</td>
<td class="has-text-centered">0.489</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="QwQ-32B-Preview" data-tooltip="Qwen's experimental 32 billion parameter reasoning-focused model. Features interesting performance characteristics on certain financial tasks.">QwQ-32B-Preview</td>
<td class="has-text-centered">0.141</td>
<td class="has-text-centered">0.290</td>
<td class="has-text-centered">0.165</td>
<td class="has-text-centered">0.812</td>
<td class="has-text-centered">0.827</td>
<td class="has-text-centered">0.812</td>
<td class="has-text-centered">0.815</td>
<td class="has-text-centered">0.629</td>
<td class="has-text-centered">0.534</td>
<td class="has-text-centered">0.550</td>
<td class="has-text-centered">0.534</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Jamba 1.5 Mini" data-tooltip="A compact variant in the Jamba model series focused on efficiency. Balances performance and computational requirements for financial tasks.">Jamba 1.5 Mini</td>
<td class="has-text-centered performance-low">0.119</td>
<td class="has-text-centered">0.282</td>
<td class="has-text-centered">0.293</td>
<td class="has-text-centered">0.784</td>
<td class="has-text-centered">0.814</td>
<td class="has-text-centered">0.784</td>
<td class="has-text-centered">0.765</td>
<td class="has-text-centered">0.380</td>
<td class="has-text-centered">0.525</td>
<td class="has-text-centered">0.418</td>
<td class="has-text-centered">0.525</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Jamba 1.5 Large" data-tooltip="An expanded variant in the Jamba model series with enhanced capabilities. Features stronger reasoning for financial tasks than its smaller counterpart.">Jamba 1.5 Large</td>
<td class="has-text-centered">0.183</td>
<td class="has-text-centered">0.363</td>
<td class="has-text-centered">-0.085</td>
<td class="has-text-centered">0.824</td>
<td class="has-text-centered">0.850</td>
<td class="has-text-centered">0.824</td>
<td class="has-text-centered">0.798</td>
<td class="has-text-centered">0.635</td>
<td class="has-text-centered">0.573</td>
<td class="has-text-centered performance-medium">0.582</td>
<td class="has-text-centered">0.573</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Claude 3.5 Sonnet" data-tooltip="Anthropic's advanced proprietary language model optimized for complex reasoning and instruction-following. Features enhanced performance on financial tasks with strong text processing capabilities.">Claude 3.5 Sonnet</td>
<td class="has-text-centered performance-low">0.101</td>
<td class="has-text-centered performance-low">0.268</td>
<td class="has-text-centered performance-best">0.402</td>
<td class="has-text-centered performance-best">0.944</td>
<td class="has-text-centered performance-best">0.945</td>
<td class="has-text-centered performance-best">0.944</td>
<td class="has-text-centered performance-best">0.944</td>
<td class="has-text-centered">0.634</td>
<td class="has-text-centered performance-medium">0.585</td>
<td class="has-text-centered">0.553</td>
<td class="has-text-centered performance-medium">0.585</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Claude 3 Haiku" data-tooltip="Anthropic's smaller efficiency-focused model in the Claude family. Designed for speed and lower computational requirements while maintaining reasonable performance on financial tasks.">Claude 3 Haiku</td>
<td class="has-text-centered">0.167</td>
<td class="has-text-centered">0.349</td>
<td class="has-text-centered">0.008</td>
<td class="has-text-centered">0.907</td>
<td class="has-text-centered">0.913</td>
<td class="has-text-centered">0.907</td>
<td class="has-text-centered">0.908</td>
<td class="has-text-centered">0.619</td>
<td class="has-text-centered">0.538</td>
<td class="has-text-centered">0.463</td>
<td class="has-text-centered">0.538</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Cohere Command R 7B" data-tooltip="Cohere's 7-billion parameter model focused on instruction-following. An efficient model with reasonable financial domain capabilities for its size.">Cohere Command R 7B</td>
<td class="has-text-centered">0.164</td>
<td class="has-text-centered">0.319</td>
<td class="has-text-centered">0.028</td>
<td class="has-text-centered">0.835</td>
<td class="has-text-centered">0.861</td>
<td class="has-text-centered">0.835</td>
<td class="has-text-centered">0.840</td>
<td class="has-text-centered">0.609</td>
<td class="has-text-centered">0.547</td>
<td class="has-text-centered">0.532</td>
<td class="has-text-centered">0.547</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Cohere Command R +" data-tooltip="Cohere's enhanced command model with improved instruction-following capabilities. Features advanced reasoning for financial domains with stronger performance than its smaller counterpart.">Cohere Command R +</td>
<td class="has-text-centered performance-medium">0.106</td>
<td class="has-text-centered">0.274</td>
<td class="has-text-centered performance-medium">0.373</td>
<td class="has-text-centered">0.741</td>
<td class="has-text-centered">0.806</td>
<td class="has-text-centered">0.741</td>
<td class="has-text-centered">0.699</td>
<td class="has-text-centered">0.608</td>
<td class="has-text-centered">0.547</td>
<td class="has-text-centered">0.533</td>
<td class="has-text-centered">0.547</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Google Gemini 1.5 Pro" data-tooltip="Google's advanced proprietary multimodal model designed for complex reasoning and instruction-following tasks. Features strong performance across financial domains with advanced reasoning capabilities.">Google Gemini 1.5 Pro</td>
<td class="has-text-centered">0.144</td>
<td class="has-text-centered">0.329</td>
<td class="has-text-centered">0.149</td>
<td class="has-text-centered">0.890</td>
<td class="has-text-centered">0.895</td>
<td class="has-text-centered">0.890</td>
<td class="has-text-centered">0.885</td>
<td class="has-text-centered">0.642</td>
<td class="has-text-centered performance-medium">0.587</td>
<td class="has-text-centered performance-best">0.593</td>
<td class="has-text-centered performance-best">0.587</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="OpenAI gpt-4o" data-tooltip="OpenAI's flagship multimodal model optimized for a balance of quality and speed. Features strong performance across diverse tasks with capabilities for complex financial reasoning and instruction following.">OpenAI gpt-4o</td>
<td class="has-text-centered">0.184</td>
<td class="has-text-centered">0.317</td>
<td class="has-text-centered">-0.089</td>
<td class="has-text-centered">0.929</td>
<td class="has-text-centered">0.931</td>
<td class="has-text-centered">0.929</td>
<td class="has-text-centered">0.928</td>
<td class="has-text-centered">0.639</td>
<td class="has-text-centered">0.515</td>
<td class="has-text-centered">0.541</td>
<td class="has-text-centered">0.515</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="OpenAI o1-mini" data-tooltip="OpenAI's smaller advanced model balancing efficiency and performance. Demonstrates surprisingly strong results on financial tasks despite its reduced parameter count.">OpenAI o1-mini</td>
<td class="has-text-centered performance-medium">0.120</td>
<td class="has-text-centered">0.295</td>
<td class="has-text-centered">0.289</td>
<td class="has-text-centered">0.918</td>
<td class="has-text-centered">0.917</td>
<td class="has-text-centered">0.918</td>
<td class="has-text-centered">0.917</td>
<td class="has-text-centered performance-best">0.660</td>
<td class="has-text-centered">0.515</td>
<td class="has-text-centered">0.542</td>
<td class="has-text-centered">0.515</td>
</tr>
</tbody>
</table>
<div class="content is-small mt-4">
<p><strong>Note:</strong> Color highlighting indicates performance ranking:
<span class="performance-best"> Best </span>,
<span class="performance-strong"> Strong </span>,
<span class="performance-medium"> Medium </span>,
<span class="performance-low"> Good </span>
</p>
</div>
</div>
</div>
<script src="static/js/tooltips.js"></script>
<script src="static/js/fixed-tooltips.js"></script>