---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- finance
- earnings-calls
- evasion-detection
- nlp
pretty_name: EvasionBench
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files: "evasionbench_17k_eva4b_labels_dedup.parquet"
---
# EvasionBench
<p align="center">
<a href="https://iiiiqiiii.github.io/EvasionBench"><img src="https://img.shields.io/badge/Project-Page-blue?style=for-the-badge" alt="Project Page"></a>
<a href="https://huggingface.co/FutureMa/Eva-4B-V2"><img src="https://img.shields.io/badge/🤗-Model-yellow?style=for-the-badge" alt="Model"></a>
<a href="https://github.com/IIIIQIIII/EvasionBench"><img src="https://img.shields.io/badge/GitHub-Repo-black?style=for-the-badge&logo=github" alt="GitHub"></a>
<a href="https://colab.research.google.com/github/IIIIQIIII/EvasionBench/blob/main/scripts/eva4b_inference.ipynb"><img src="https://img.shields.io/badge/Colab-Quick_Start-F9AB00?style=for-the-badge&logo=googlecolab" alt="Open In Colab"></a>
<a href="https://arxiv.org/abs/2601.09142"><img src="https://img.shields.io/badge/arXiv-Paper-red?style=for-the-badge" alt="Paper"></a>
</p>
EvasionBench is a benchmark dataset for detecting **evasive answers** in earnings call Q&A sessions. The task is to classify how directly corporate management addresses questions from financial analysts.
## Dataset Description
- **Repository:** [https://github.com/IIIIQIIII/EvasionBench](https://github.com/IIIIQIIII/EvasionBench)
- **Paper:** [https://arxiv.org/abs/2601.09142](https://arxiv.org/abs/2601.09142)
- **Point of Contact:** [GitHub Issues](https://github.com/IIIIQIIII/EvasionBench/issues)
### Dataset Summary
This dataset contains 16,726 question-answer pairs from earnings call transcripts, each labeled with one of three evasion levels. The labels were generated using the Eva-4B-V2 model, a fine-tuned classifier specifically trained for financial discourse evasion detection.
### Supported Tasks
- **Text Classification**: Classify the directness of management responses to analyst questions.
### Languages
English
## Dataset Structure
### Data Fields
| Field | Type | Description |
|-------|------|-------------|
| `uid` | string | Unique identifier for each sample |
| `question` | string | Analyst's question during the earnings call |
| `answer` | string | Management's response to the question |
| `eva4b_label` | string | Evasion label: `direct`, `intermediate`, or `fully_evasive` |
### Label Definitions
| Label | Definition | Description |
|-------|------------|-------------|
| `direct` | The core question is directly and explicitly answered | Clear figures, "Yes/No" stance, or direct explanations |
| `intermediate` | The response provides related context but sidesteps the specific core | Hedging, providing a range instead of a point, or answering adjacent topics |
| `fully_evasive` | The question is ignored, explicitly refused, or the response is entirely off-topic | Explicit refusal, total redirection, or irrelevant responses |
### Data Statistics
| Metric | Value |
|--------|-------|
| Total Samples | 16,726 |
| Direct | 8,749 (52.3%) |
| Intermediate | 7,359 (44.0%) |
| Fully Evasive | 618 (3.7%) |
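The `fully_evasive` class is rare (3.7%), which matters when choosing evaluation metrics. A small helper like the sketch below (my own utility, not part of the dataset tooling) can reproduce the counts above from the `eva4b_label` column:

```python
from collections import Counter

def label_distribution(labels):
    """Return {label: (count, fraction)} ordered from most to least common."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: (n, n / total) for n, label in
            ((n, label) for label, n in counts.most_common())}

# With the dataset loaded via `load_dataset("FutureMa/EvasionBench")`,
# pass dataset["train"]["eva4b_label"] here; a toy list for illustration:
print(label_distribution(["direct", "direct", "intermediate"]))
```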
### Example
```python
{
"uid": "4addbff893b81f64131fdc712d7a6d9a",
"question": "What is the expected margin for Q4?",
"answer": "We expect it to be 32%.",
"eva4b_label": "direct"
}
```
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
dataset = load_dataset("FutureMa/EvasionBench")
```
### Loading from Parquet
```python
import pandas as pd
df = pd.read_parquet("evasionbench_17k_eva4b_labels_dedup.parquet")
```
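Because the labels are stored as plain strings, standard pandas filtering works directly. For instance, to pull out the rare fully evasive class for inspection (the helper name and the toy frame below are illustrative, not part of the dataset):

```python
import pandas as pd

def filter_by_label(df: pd.DataFrame, label: str) -> pd.DataFrame:
    """Select rows whose Eva-4B label matches `label`."""
    return df[df["eva4b_label"] == label]

# On the real file: df = pd.read_parquet("evasionbench_17k_eva4b_labels_dedup.parquet")
# Toy frame mirroring the dataset schema:
toy = pd.DataFrame({
    "uid": ["a1", "b2", "c3"],
    "question": ["q1", "q2", "q3"],
    "answer": ["ans1", "ans2", "ans3"],
    "eva4b_label": ["direct", "fully_evasive", "intermediate"],
})
print(filter_by_label(toy, "fully_evasive")["uid"].tolist())
```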
### Quick Start: Inference with Eva-4B-V2
````python
from datasets import load_dataset
from transformers import pipeline
# Load dataset
dataset = load_dataset("FutureMa/EvasionBench")
# Load model using text-generation pipeline
pipe = pipeline("text-generation", model="FutureMa/Eva-4B-V2", device_map="auto")
# Get a sample
sample = dataset["train"][0]
question = sample["question"]
answer = sample["answer"]
# Prepare prompt
prompt = f"""You are a financial analyst. Your task is to Detect Evasive Answers in Financial Q&A
Question: {question}
Answer: {answer}
Response format:
```json
{{"label": "direct|intermediate|fully_evasive"}}
```
Answer in ```json content, no other text"""
# Run inference
result = pipe(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
````
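The prompt asks the model to reply inside a fenced `json` block, so the predicted label has to be extracted from the generated text. A minimal parser might look like the following (the regex and the `None` fallback are my own choices, not part of the official pipeline):

```python
import json
import re

FENCE = "`" * 3  # the literal triple-backtick fence, built indirectly

def extract_label(generated_text):
    """Pull the predicted label out of the model's fenced json reply."""
    pattern = FENCE + r"json\s*(\{.*?\})\s*" + FENCE
    match = re.search(pattern, generated_text, re.DOTALL)
    if not match:
        return None  # no fenced json block found
    try:
        return json.loads(match.group(1)).get("label")
    except json.JSONDecodeError:
        return None  # malformed json inside the fence

reply = FENCE + 'json\n{"label": "direct"}\n' + FENCE
print(extract_label(reply))  # direct
```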
For a complete inference example with batch processing and evaluation, see our [Colab notebook](https://colab.research.google.com/github/IIIIQIIII/EvasionBench/blob/main/scripts/eva4b_inference.ipynb).
## Dataset Creation
### Source Data
The question-answer pairs are derived from publicly available earnings call transcripts.
### Annotation Process
Labels were generated using **Eva-4B-V2**, a 4B parameter model fine-tuned specifically for evasion detection in financial discourse. Eva-4B-V2 achieves 84.9% Macro-F1 on the EvasionBench evaluation set, outperforming frontier LLMs including Claude Opus 4.5 and Gemini 3 Flash.
## Considerations for Using the Data
### Social Impact
This dataset can be used to:
- Improve transparency in corporate communications
- Assist financial analysts in identifying evasive responses
- Support research in financial NLP and discourse analysis
### Limitations
- Labels are model-generated (Eva-4B-V2) rather than human-annotated
- The dataset reflects the distribution of evasion patterns in the source data
- Performance may vary across different industries or time periods
## Citation
If you use this dataset, please cite:
```bibtex
@misc{ma2026evasionbenchlargescalebenchmarkdetecting,
title={EvasionBench: A Large-Scale Benchmark for Detecting Managerial Evasion in Earnings Call Q&A},
author={Shijian Ma and Yan Lin and Yi Yang},
year={2026},
eprint={2601.09142},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2601.09142}
}
```
## License
Apache 2.0