# 🤖 SoMe: A Realistic Benchmark for LLM-based Social Media Agents

<div align="center" style="line-height: 1.5;">

[GitHub](https://github.com/LivXue/SoMe)
[Dataset](https://huggingface.co/datasets/LivXue/SoMe)
[Paper](https://arxiv.org/pdf/2512.14720)

</div>

---
## 📖 Overview

SoMe is a comprehensive benchmark for evaluating Large Language Model (LLM)-based agents in realistic social media scenarios. It provides a standardized framework for testing and comparing social media agents across multiple dimensions of performance.

SoMe comprises a diverse collection of:

- **8 social media agent tasks**
- **9,164,284 posts** from various social media platforms
- **6,591 user profiles** with rich behavioral data
- **25,686 reports** from external websites
- **17,869 meticulously annotated task queries**

---
## 📰 News

- **[2025.11]** 🎉 Our paper has been accepted by AAAI 2026!

---
## ✨ Features

SoMe evaluates social media agents on 8 key tasks, covering diverse aspects of social media intelligence:

| Task Category | Task Name | Description |
|---------------|-----------|-------------|
| **Post-centered** | 🚨 Real-time Event Detection (RED) | Identify and track emerging events in real time |
| **Post-centered** | 📊 Streaming Event Summary (SES) | Summarize ongoing events from streaming data |
| **Post-centered** | 🚫 Misinformation Detection (MID) | Identify and flag potentially false or misleading information |
| **User-centered** | 🎯 User Behavior Prediction (UBP) | Predict user interactions with social media content |
| **User-centered** | 😊 User Emotion Analysis (UEA) | Analyze user emotions towards social media content |
| **User-centered** | 💬 User Comment Simulation (UCS) | Simulate realistic user comments |
| **Comprehensive** | 📱 Media Content Recommendation (MCR) | Recommend relevant media content based on user interests |
| **Comprehensive** | ❓ Social Media Question-Answering (SMQ) | Accurately answer questions about social media content |

---
## 📊 Dataset Statistics

The SoMe benchmark includes comprehensive datasets for each task, with the following statistics:

| Task | # Queries | # Data | Data Type |
|------|-----------|--------|-----------|
| 🚨 Real-time Event Detection | 568 | 476,611 | Posts |
| 📊 Streaming Event Summary | 154 | 7,898,959 | Posts |
| 🚫 Misinformation Detection | 1,451 | 27,137 | Posts & Knowledge |
| 🎯 User Behavior Prediction | 3,000 | 840,200 | Posts & Users |
| 😊 User Emotion Analysis | 2,696 | 840,200 | Posts & Users |
| 💬 User Comment Simulation | 4,000 | 840,200 | Posts & Users |
| 📱 Media Content Recommendation | 4,000 | 840,200 | Posts & Users |
| ❓ Social Media Question-Answering | 2,000 | 8,651,759 | Posts & Users |
| **Total** | **17,869** | **9,242,907** | **All** |
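Note that the data column overlaps across tasks (the user-centered tasks share the same Posts & Users corpus, so the totals row counts each unique item once), while every query is distinct. The query column can therefore be checked directly:

```python
# Per-task query counts from the table above (keys are the task
# abbreviations from the Features table). Data counts are shared across
# tasks, but queries are unique, so this column sums exactly.
queries = {
    "RED": 568,  "SES": 154,  "MID": 1451, "UBP": 3000,
    "UEA": 2696, "UCS": 4000, "MCR": 4000, "SMQ": 2000,
}
total_queries = sum(queries.values())
print(total_queries)  # 17869, matching the totals row
```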
---
## 📁 Project Structure

```
Social-Media-Agent/
├── 🤖 agent.py                  # Main social media agent implementation
├── 🧠 qwen_agent/               # Qwen-Agent library
├── 📋 tasks/                    # Task-specific modules
│   ├── 📱 media_content_recommend/
│   ├── 🚫 misinformation_detection/
│   ├── 🚨 realtime_event_detection/
│   ├── ❓ social_media_question_answering/
│   ├── 📊 streaming_event_summary/
│   ├── 💬 user_comment_simulation/
│   ├── 😊 user_emotion_analysis/
│   └── 🎯 user_behavior_prediction/
├── 🛠️ tools/                    # Tools for social media analysis
├── 🧪 test_*.py                 # Test scripts for each task
├── 📝 eval_scripts/             # Evaluation scripts for scoring
├── 📈 results/                  # Directory for storing results
├── 📂 datasets/                 # Dataset directory
└── 💾 database/                 # Database directory
```

---
## 🚀 Installation

### Prerequisites

- Python 3.12+
- Git (for cloning the repository)
- Sufficient disk space for the data (recommended: 50 GB+)

### Installation Steps

1. **📥 Clone the repository**

   ```bash
   git clone https://github.com/LivXue/SoMe.git
   cd SoMe
   ```

2. **📦 Install dependencies**

   ```bash
   pip install -r requirements.txt
   ```

3. **📥 Download test data**

   - Hugging Face: [Download Link](https://huggingface.co/datasets/LivXue/SoMe)
   - Google Drive: [Download Link](https://drive.google.com/file/d/1sD2EaZStK5nODQWlJTHZ8WfFb5QHgwMN/view?usp=drive_link)
   - Baidu Disk: [Download Link](https://pan.baidu.com/s/1DugTyLR5AaQHeOdXG6wqQQ?pwd=SoMe) (password: SoMe)

   After downloading, unzip the data into the `database` directory.

---
## 💻 Usage

### 🏃‍♂️ Running Individual Tasks

Each task can be evaluated with its corresponding test script:

```bash
# 🚨 Real-time Event Detection
python test_realtime_event_detection.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 📊 Streaming Event Summary
python test_streaming_event_summary.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 🚫 Misinformation Detection
python test_misinformation_detection.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 🎯 User Behavior Prediction
python test_user_behavior_prediction.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 😊 User Emotion Analysis
python test_user_emotion_analysis.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 💬 User Comment Simulation
python test_user_comment_simulation.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 📱 Media Content Recommendation
python test_media_content_recommend.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# ❓ Social Media Question Answering
python test_social_media_question_answering.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY
```
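To run the whole suite in one pass, the eight invocations above can be wrapped in a loop. This is only a sketch, assuming the default checkout layout; the `-f` guard simply skips any script that is missing, and the model values are the examples used elsewhere in this README:

```shell
# Run every task script in sequence with the same model settings.
MODEL="deepseek-chat"
BASE_URL="https://api.deepseek.com"
API_KEY="${API_KEY:-YOUR_API_KEY}"   # or export API_KEY beforehand

TASKS="realtime_event_detection streaming_event_summary misinformation_detection \
user_behavior_prediction user_emotion_analysis user_comment_simulation \
media_content_recommend social_media_question_answering"

for task in $TASKS; do
    script="test_${task}.py"
    if [ -f "$script" ]; then   # skip scripts absent from a partial checkout
        python "$script" --model "$MODEL" --base_url "$BASE_URL" --api_key "$API_KEY"
    fi
done
```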
### ⚙️ Command-Line Arguments

| Argument | Description | Example |
|----------|-------------|---------|
| `--model` | Model name to use | `deepseek-chat` |
| `--base_url` | Base URL of the model server | `https://api.deepseek.com` |
| `--api_key` | API key for the model server | Your actual API key |
| `--output_path` | Output path for results | `results/my_experiment` |
### 📊 Evaluation

After running the test scripts, score the results using the provided evaluation scripts:

```bash
# Option 1: tasks with LLM-based answer extraction
python eval_scripts/[TASK]_extraction.py
python eval_scripts/[TASK]_compute_score.py

# Option 2: tasks with LLM-as-judge scoring
python eval_scripts/[TASK]_scoring.py
python eval_scripts/[TASK]_compute_score.py
```

> **Note**: The LLM settings for evaluation are configured in `eval_scripts/settings.json`.
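The authoritative schema of `eval_scripts/settings.json` is the file shipped with the repository; the fragment below is only an illustrative guess at its shape, mirroring the model name, endpoint, and key that the test scripts take on the command line:

```json
{
  "model": "deepseek-chat",
  "base_url": "https://api.deepseek.com",
  "api_key": "YOUR_API_KEY"
}
```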
---
## 🔧 Model Support

The benchmark supports a wide range of LLMs through OpenAI-compatible API endpoints:

- 🧩 **Qwen series models** (Qwen2.5, Qwen3, etc.)
- 🌐 **OpenAI models** (GPT-4, GPT-5, etc.)
- 🔌 **Third-party models** with OpenAI-compatible APIs (DeepSeek, Claude, etc.)
- 📦 **Local models** served behind OpenAI-compatible wrappers (vLLM, Ollama, etc.)
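For the local-model route, one workable setup (an assumption, not the only option) is vLLM's OpenAI-compatible server — e.g. `vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000` in a separate terminal — after which the test scripts only need the matching endpoint values:

```shell
# Endpoint values for a local vLLM server (adjust host/port/model to your setup).
BASE_URL="http://localhost:8000/v1"   # vLLM serves the OpenAI API under /v1
MODEL="Qwen/Qwen2.5-7B-Instruct"      # must match the served model name
API_KEY="EMPTY"                       # vLLM accepts any key unless --api-key is set
```

Pass these as `--base_url "$BASE_URL" --model "$MODEL" --api_key "$API_KEY"` to any of the test scripts above.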
---
## 📝 Citation

If you use this benchmark in your research, please cite our paper:

```bibtex
@inproceedings{some2026,
  title     = {SoMe: A Realistic Benchmark for LLM-based Social Media Agents},
  author    = {Dizhan Xue and Jing Cui and Shengsheng Qian and Chuanrui Hu and Changsheng Xu},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  year      = {2026}
}
```

---
## 🤝 Contributing

We welcome contributions to improve the benchmark! Here is how you can help:

1. **🐛 Report bugs** by opening issues with detailed descriptions
2. **💡 Suggest features** for new tasks or improvements
3. **🔧 Submit code** via pull requests for bug fixes or enhancements
4. **📊 Add datasets** to expand the benchmark's coverage
5. **📚 Improve documentation** for better usability

Please see our [Contributing Guidelines](CONTRIBUTING.md) for more details.

---
## 📄 License

This project is licensed under the Apache License 2.0; see the [LICENSE](LICENSE) file for details.

---
## 🙏 Acknowledgments

We would like to express our gratitude to:

- The **Qwen team** for their excellent Qwen-Agent framework, which forms the foundation of this benchmark
- All contributors who have helped develop and improve SoMe
- The social media platforms and data providers that make this research possible
- The AAAI 2026 reviewers for their valuable feedback

---
## 📞 Contact

For questions or inquiries about the benchmark, please contact:

- Dizhan Xue: xuedizhan17@mails.ucas.ac.cn

Visit our [GitHub repository](https://github.com/LivXue/SoMe) for the latest updates and discussions.