# 🤖 SoMe: A Realistic Benchmark for LLM-based Social Media Agents
[![GITHUB](https://img.shields.io/badge/Github-24292F?style=for-the-badge&logo=github&logoColor=white)](https://github.com/LivXue/SoMe) [![Dataset](https://img.shields.io/badge/Dataset-yellow?style=for-the-badge&logo=huggingface&logoColor=white)](https://huggingface.co/datasets/LivXue/SoMe) [![Paper](https://img.shields.io/badge/Paper-red?style=for-the-badge&logo=arxiv&logoColor=white)](https://arxiv.org/pdf/2512.14720)
---

## 📋 Overview

SoMe is a comprehensive benchmark for evaluating Large Language Model (LLM)-based agents in realistic social media scenarios. It provides a standardized framework for testing and comparing social media agents across multiple dimensions of performance.

SoMe comprises a diverse collection of:

- **8 social media agent tasks**
- **9,164,284 posts** from various social media platforms
- **6,591 user profiles** with rich behavioral data
- **25,686 reports** from external websites
- **17,869 meticulously annotated task queries**

---

## 📰 News

- **[2025.11]** 🎉 Our paper is accepted by AAAI 2026!

---

## ✨ Features

SoMe evaluates social media agents on 8 key tasks, covering diverse aspects of social media intelligence:

| Task Category | Task Name | Description |
|---------------|-----------|-------------|
| **Post-centered** | 🚨 Real-time Event Detection (RED) | Identify and track emerging events in real time |
| **Post-centered** | 📊 Streaming Event Summary (SES) | Summarize ongoing events from streaming data |
| **Post-centered** | 🚫 Misinformation Detection (MID) | Identify and flag potentially false or misleading information |
| **User-centered** | 🎯 User Behavior Prediction (UBP) | Predict user interactions with social media content |
| **User-centered** | 😊 User Emotion Analysis (UEA) | Analyze user emotions towards social media content |
| **User-centered** | 💬 User Comment Simulation (UCS) | Simulate realistic user comments |
| **Comprehensive** | 📱 Media Content Recommendation (MCR) | Recommend relevant media content based on user interests |
| **Comprehensive** | ❓ Social Media Question-Answering (SMQ) | Accurately answer questions about social media content |

---

## 📈 Dataset Statistics

The SoMe benchmark includes comprehensive datasets for each task, with the following statistics:

| Task | # Query | # Data | Data Type |
|------|---------|--------|-----------|
| 🚨 Real-time Event Detection | 568 | 476,611 | Posts |
| 📊 Streaming Event Summary | 154 | 7,898,959 | Posts |
| 🚫 Misinformation Detection | 1,451 | 27,137 | Posts & Knowledge |
| 🎯 User Behavior Prediction | 3,000 | 840,200 | Posts & Users |
| 😊 User Emotion Analysis | 2,696 | 840,200 | Posts & Users |
| 💬 User Comment Simulation | 4,000 | 840,200 | Posts & Users |
| 📱 Media Content Recommendation | 4,000 | 840,200 | Posts & Users |
| ❓ Social Media Question-Answering | 2,000 | 8,651,759 | Posts & Users |
| **Total** | **17,869** | **9,242,907** | **All** |

---

## 📁 Project Structure

```
Social-Media-Agent/
├── 🤖 agent.py                # Main social media agent implementation
├── 🔧 qwen_agent/             # Qwen-Agent library
├── 📋 tasks/                  # Task-specific modules
│   ├── 📱 media_content_recommend/
│   ├── 🚫 misinformation_detection/
│   ├── 🚨 realtime_event_detection/
│   ├── ❓ social_media_question_answering/
│   ├── 📊 streaming_event_summary/
│   ├── 💬 user_comment_simulation/
│   ├── 😊 user_emotion_analysis/
│   └── 🎯 user_behavior_prediction/
├── 🛠️ tools/                  # Tools for social media analysis
├── 🧪 test_*.py               # Test scripts for each task
├── 📊 eval_scripts/           # Evaluation scripts for scoring
├── 📂 results/                # Directory for storing results
├── 📊 datasets/               # Dataset directory
└── 💾 database/               # Database directory
```

---

## 🚀 Installation

### Prerequisites

- Python 3.12+ installed on your system
- Git installed for repository cloning
- Sufficient disk space for data (recommended: 50 GB+)

### Installation Steps

1. **📥 Clone the repository**

   ```bash
   git clone https://github.com/LivXue/SoMe.git
   cd SoMe
   ```

2. **📦 Install dependencies**

   ```bash
   pip install -r requirements.txt
   ```
3. **📥 Download test data**

   - Hugging Face Dataset: [Download Link](https://huggingface.co/datasets/LivXue/SoMe)
   - Google Drive: [Download Link](https://drive.google.com/file/d/1sD2EaZStK5nODQWlJTHZ8WfFb5QHgwMN/view?usp=drive_link)
   - Baidu Disk: [Download Link](https://pan.baidu.com/s/1DugTyLR5AaQHeOdXG6wqQQ?pwd=SoMe) (Password: SoMe)

   After downloading, unzip the data into the `database` directory.

---

## 💻 Usage

### 🏃‍♂️ Running Individual Tasks

Each task can be evaluated using its corresponding test script:

```bash
# 🚨 Real-time Event Detection
python test_realtime_event_detection.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 📊 Streaming Event Summary
python test_streaming_event_summary.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 🚫 Misinformation Detection
python test_misinformation_detection.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 🎯 User Behavior Prediction
python test_user_behavior_prediction.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 😊 User Emotion Analysis
python test_user_emotion_analysis.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 💬 User Comment Simulation
python test_user_comment_simulation.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 📱 Media Content Recommendation
python test_media_content_recommend.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# ❓ Social Media Question Answering
python test_social_media_question_answering.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY
```

### ⚙️ Command Line Arguments

| Argument | Description | Example |
|----------|-------------|---------|
| `--model` | The model name to use | "deepseek-chat" |
| `--base_url` | The base URL for the model server | "https://api.deepseek.com" |
| `--api_key` | The API key for the model server | Your actual API key |
| `--output_path` | Output path for results | "results/my_experiment" |
### 📊 Evaluation

After running the test scripts, evaluate the results using the provided evaluation scripts:

```bash
# Option 1: For tasks with LLM-based answer extraction
python eval_scripts/[TASK]_extraction.py
python eval_scripts/[TASK]_compute_score.py

# Option 2: For tasks with LLM-as-judge scoring
python eval_scripts/[TASK]_scoring.py
python eval_scripts/[TASK]_compute_score.py
```

> **Note**: The LLM settings for evaluation are configured in `eval_scripts/settings.json`.

---

## 🧠 Model Support

The benchmark supports various LLMs through OpenAI-compatible API endpoints:

- 🧩 **Qwen series models** (Qwen2.5, Qwen3, etc.)
- 🔌 **OpenAI models** (GPT-4, GPT-5, etc.)
- 🌐 **Third-party models** with OpenAI-compatible APIs (DeepSeek, Claude, etc.)
- 📦 **Local models** served with OpenAI-compatible wrappers (vLLM, Ollama, etc.)

---

## 📚 Citation

If you use this benchmark in your research, please cite our paper:

```bibtex
@inproceedings{some2026,
  title={SoMe: A Realistic Benchmark for LLM-based Social Media Agents},
  author={Dizhan Xue and Jing Cui and Shengsheng Qian and Chuanrui Hu and Changsheng Xu},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2026}
}
```

---

## 🤝 Contributing

We welcome contributions to improve the benchmark! Here's how you can help:

1. **🐛 Report bugs** by opening issues with detailed descriptions
2. **💡 Suggest features** for new tasks or improvements
3. **🔧 Submit code** via pull requests for bug fixes or enhancements
4. **📊 Add datasets** to expand the benchmark coverage
5. **📝 Improve documentation** for better usability

Please see our [Contributing Guidelines](CONTRIBUTING.md) for more details.

---

## 📄 License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
---

## 🙏 Acknowledgments

We would like to express our gratitude to:

- The **Qwen team** for their excellent Qwen-Agent framework, which forms the foundation of this benchmark
- All contributors who have helped develop and improve SoMe
- The social media platforms and data providers that make this research possible
- The AAAI 2026 reviewers for their valuable feedback

---

## 📞 Contact

For questions or inquiries about the benchmark, please contact:

- Dizhan Xue: xuedizhan17@mails.ucas.ac.cn

Visit our [GitHub repository](https://github.com/LivXue/SoMe) for the latest updates and discussions.