# SoMe: A Realistic Benchmark for LLM-based Social Media Agents
[GitHub Repository](https://github.com/LivXue/SoMe) | [Hugging Face Dataset](https://huggingface.co/datasets/LivXue/SoMe) | [Paper](https://arxiv.org/pdf/2512.14720)
---
## Overview
SoMe is a comprehensive benchmark for evaluating Large Language Model (LLM)-based agents in realistic social media scenarios. It provides a standardized framework for testing and comparing social media agents across multiple dimensions of performance.
SoMe comprises a diverse collection of:
- **8 social media agent tasks**
- **9,164,284 posts** from various social media platforms
- **6,591 user profiles** with rich behavioral data
- **25,686 reports** from external websites
- **17,869 meticulously annotated task queries**
---
## News
- **[2025.11]** Our paper has been accepted by AAAI 2026!
---
## Features
The SoMe benchmark evaluates social media agents across 8 key tasks, covering diverse aspects of social media intelligence:
| Task Category | Task Name | Description |
|---------------|-----------|-------------|
| **Post-centered** | Real-time Event Detection (RED) | Identify and track emerging events in real time |
| **Post-centered** | Streaming Event Summary (SES) | Summarize ongoing events from streaming data |
| **Post-centered** | Misinformation Detection (MID) | Identify and flag potentially false or misleading information |
| **User-centered** | User Behavior Prediction (UBP) | Predict user interactions with social media content |
| **User-centered** | User Emotion Analysis (UEA) | Analyze user emotions towards social media content |
| **User-centered** | User Comment Simulation (UCS) | Simulate realistic user comments |
| **Comprehensive** | Media Content Recommendation (MCR) | Recommend relevant media content based on user interests |
| **Comprehensive** | Social Media Question-Answering (SMQ) | Accurately answer questions about social media content |
---
## Dataset Statistics
The SoMe benchmark includes comprehensive datasets for each task, with the following statistics:
| Task | # Query | # Data | Data Type |
|------|---------|--------|-----------|
| Real-time Event Detection | 568 | 476,611 | Posts |
| Streaming Event Summary | 154 | 7,898,959 | Posts |
| Misinformation Detection | 1,451 | 27,137 | Posts & Knowledge |
| User Behavior Prediction | 3,000 | 840,200 | Posts & Users |
| User Emotion Analysis | 2,696 | 840,200 | Posts & Users |
| User Comment Simulation | 4,000 | 840,200 | Posts & Users |
| Media Content Recommendation | 4,000 | 840,200 | Posts & Users |
| Social Media Question-Answering | 2,000 | 8,651,759 | Posts & Users |
| **Total** | **17,869** | **9,242,907** | **All** |
---
## Project Structure
```
Social-Media-Agent/
├── agent.py              # Main social media agent implementation
├── qwen_agent/           # Qwen-Agent library
├── tasks/                # Task-specific modules
│   ├── media_content_recommend/
│   ├── misinformation_detection/
│   ├── realtime_event_detection/
│   ├── social_media_question_answering/
│   ├── streaming_event_summary/
│   ├── user_comment_simulation/
│   ├── user_emotion_analysis/
│   └── user_behavior_prediction/
├── tools/                # Tools for social media analysis
├── test_*.py             # Test scripts for each task
├── eval_scripts/         # Evaluation scripts for scoring
├── results/              # Directory for storing results
├── datasets/             # Dataset directory
└── database/             # Database directory
```
---
## Installation
### Prerequisites
- Python 3.12+ installed on your system
- Git installed for repository cloning
- Sufficient disk space for data (recommended: 50GB+)
### Installation Steps
1. **Clone the repository**

   ```bash
   git clone https://github.com/LivXue/SoMe.git
   cd SoMe
   ```

2. **Install dependencies**

   ```bash
   pip install -r requirements.txt
   ```

3. **Download test data**

   - Hugging Face Dataset: [Download Link](https://huggingface.co/datasets/LivXue/SoMe)
   - Google Drive: [Download Link](https://drive.google.com/file/d/1sD2EaZStK5nODQWlJTHZ8WfFb5QHgwMN/view?usp=drive_link)
   - Baidu Disk: [Download Link](https://pan.baidu.com/s/1DugTyLR5AaQHeOdXG6wqQQ?pwd=SoMe) (Password: SoMe)

   After downloading, unzip the data into the `database` directory.
---
## Usage
### Running Individual Tasks
Each task can be evaluated using its corresponding test script:
```bash
# Real-time Event Detection
python test_realtime_event_detection.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# Streaming Event Summary
python test_streaming_event_summary.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# Misinformation Detection
python test_misinformation_detection.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# User Behavior Prediction
python test_user_behavior_prediction.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# User Emotion Analysis
python test_user_emotion_analysis.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# User Comment Simulation
python test_user_comment_simulation.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# Media Content Recommendation
python test_media_content_recommend.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# Social Media Question Answering
python test_social_media_question_answering.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY
```
### Command-Line Arguments
| Argument | Description | Example |
|----------|-------------|---------|
| `--model` | The model name to use | "deepseek-chat" |
| `--base_url` | The base URL for the model server | "https://api.deepseek.com" |
| `--api_key` | The API key for the model server | Your actual API key |
| `--output_path` | Output path for results | "results/my_experiment" |
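Since every test script accepts the same arguments, a small driver can sweep all eight tasks in one go. Below is a minimal sketch, not part of the repository: the script names come from the project layout above, while the model name, server URL, and key are placeholders you would substitute.

```python
import subprocess

# Test scripts from the repository layout; one entry per task.
TASK_SCRIPTS = [
    "test_realtime_event_detection.py",
    "test_streaming_event_summary.py",
    "test_misinformation_detection.py",
    "test_user_behavior_prediction.py",
    "test_user_emotion_analysis.py",
    "test_user_comment_simulation.py",
    "test_media_content_recommend.py",
    "test_social_media_question_answering.py",
]

def build_command(script, model, base_url, api_key, output_path=None):
    """Assemble the argv list for one task run."""
    cmd = ["python", script,
           "--model", model,
           "--base_url", base_url,
           "--api_key", api_key]
    if output_path:
        cmd += ["--output_path", output_path]
    return cmd

if __name__ == "__main__":
    for script in TASK_SCRIPTS:
        cmd = build_command(script, "deepseek-chat",
                            "https://api.deepseek.com", "YOUR_API_KEY")
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment to actually launch each run
```

Running the tasks sequentially keeps API usage predictable; for faster sweeps you could parallelize the `subprocess` calls, at the cost of higher concurrent load on the model server.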
### Evaluation
After running the test scripts, evaluate the results using the provided evaluation scripts:
```bash
# Option 1: For tasks with LLM-based answer extraction
python eval_scripts/[TASK]_extraction.py
python eval_scripts/[TASK]_compute_score.py
# Option 2: For tasks with LLM-as-judge scoring
python eval_scripts/[TASK]_scoring.py
python eval_scripts/[TASK]_compute_score.py
```
> **Note**: The LLM settings used for evaluation are configured in `eval_scripts/settings.json`.
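The actual schema of `eval_scripts/settings.json` is defined by the repository. Purely as an illustration, a loader for a hypothetical schema with `model`, `base_url`, and `api_key` fields could look like:

```python
import json

def load_eval_settings(path="eval_scripts/settings.json"):
    """Load evaluator LLM settings.

    The keys checked below (model, base_url, api_key) are hypothetical;
    consult the repository's settings.json for the real schema.
    """
    with open(path, encoding="utf-8") as f:
        settings = json.load(f)
    for key in ("model", "base_url", "api_key"):  # assumed fields
        if key not in settings:
            raise KeyError(f"missing '{key}' in {path}")
    return settings
```

Validating the file up front surfaces configuration mistakes before a long evaluation run starts, rather than partway through.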
---
## Model Support
The benchmark supports various LLM models through OpenAI-compatible API endpoints:
- **Qwen series models** (Qwen2.5, Qwen3, etc.)
- **OpenAI models** (GPT-4, GPT-5, etc.)
- **Third-party models** with OpenAI-compatible APIs (DeepSeek, Claude, etc.)
- **Local models** served with OpenAI-compatible wrappers (vLLM, Ollama, etc.)
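All of these backends are reached through the same OpenAI-compatible chat-completions wire format, which is why a single `--base_url`/`--api_key` pair is enough to switch models. As a stdlib-only sketch of that request shape (the endpoint path and JSON payload follow the OpenAI chat-completions convention; the URL and key below are placeholders):

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, messages):
    """Build an OpenAI-compatible /chat/completions request (not sent here)."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

req = build_chat_request(
    "https://api.deepseek.com/v1", "YOUR_API_KEY", "deepseek-chat",
    [{"role": "user", "content": "Summarize this post."}],
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

Because the request shape is identical across providers, pointing `base_url` at a local vLLM or Ollama endpoint works the same way as pointing it at a hosted API.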
---
## Citation
If you use this benchmark in your research, please cite our paper:
```bibtex
@inproceedings{some2026,
  title={SoMe: A Realistic Benchmark for LLM-based Social Media Agents},
  author={Dizhan Xue and Jing Cui and Shengsheng Qian and Chuanrui Hu and Changsheng Xu},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2026}
}
```
---
## Contributing
We welcome contributions to improve the benchmark! Here's how you can help:
1. **Report bugs** by opening issues with detailed descriptions
2. **Suggest features** for new tasks or improvements
3. **Submit code** via pull requests for bug fixes or enhancements
4. **Add datasets** to expand the benchmark coverage
5. **Improve documentation** for better usability
Please see our [Contributing Guidelines](CONTRIBUTING.md) for more details.
---
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
---
## Acknowledgments
We would like to express our gratitude to:
- The **Qwen team** for their excellent Qwen-Agent framework, which forms the foundation of this benchmark
- All contributors who have helped develop and improve SoMe
- The social media platforms and data providers that make this research possible
- The AAAI 2026 reviewers for their valuable feedback
---
## Contact
For questions or inquiries about the benchmark, please contact:
- Dizhan Xue: xuedizhan17@mails.ucas.ac.cn
Visit our [GitHub repository](https://github.com/LivXue/SoMe) for the latest updates and discussions.