
# πŸ€– SoMe: A Realistic Benchmark for LLM-based Social Media Agents



## πŸ“‹ Overview

SoMe is a comprehensive benchmark designed to evaluate the capabilities of Large Language Model (LLM)-based agents in realistic social media scenarios. This benchmark provides a standardized framework for testing and comparing social media agents across multiple dimensions of performance.

SoMe comprises a diverse collection of:

  • 8 social media agent tasks
  • 9,164,284 posts from various social media platforms
  • 6,591 user profiles with rich behavioral data
  • 25,686 reports from external websites
  • 17,869 meticulously annotated task queries

## πŸ“° News

  • [2025.11] πŸŽ‰ Our paper is accepted by AAAI 2026!

## ✨ Features

The SoMe benchmark evaluates social media agents on 8 key tasks, covering diverse aspects of social media intelligence:

| Task Category | Task Name | Description |
|---|---|---|
| Post-centered | 🚨 Realtime Event Detection (RED) | Identify and track emerging events in real time |
| Post-centered | πŸ“Š Streaming Event Summary (SES) | Summarize ongoing events from streaming data |
| Post-centered | 🚫 Misinformation Detection (MID) | Identify and flag potentially false or misleading information |
| User-centered | 🎯 User Behavior Prediction (UBP) | Predict user interactions with social media content |
| User-centered | 😊 User Emotion Analysis (UEA) | Analyze user emotions towards social media content |
| User-centered | πŸ’¬ User Comment Simulation (UCS) | Simulate realistic user comments |
| Comprehensive | πŸ“± Media Content Recommendation (MCR) | Recommend relevant media content based on user interests |
| Comprehensive | ❓ Social Media Question-Answering (SMQ) | Accurately answer questions about social media content |

## πŸ“ˆ Dataset Statistics

The SoMe benchmark includes comprehensive datasets for each task, with the following statistics:

| Task | # Query | # Data | Data Type |
|---|---:|---:|---|
| 🚨 Realtime Event Detection | 568 | 476,611 | Posts |
| πŸ“Š Streaming Event Summary | 154 | 7,898,959 | Posts |
| 🚫 Misinformation Detection | 1,451 | 27,137 | Posts & Knowledge |
| 🎯 User Behavior Prediction | 3,000 | 840,200 | Posts & Users |
| 😊 User Emotion Analysis | 2,696 | 840,200 | Posts & Users |
| πŸ’¬ User Comment Simulation | 4,000 | 840,200 | Posts & Users |
| πŸ“± Media Content Recommendation | 4,000 | 840,200 | Posts & Users |
| ❓ Social Media Question-Answering | 2,000 | 8,651,759 | Posts & Users |
| **Total** | **17,869** | **9,242,907** | All |

πŸ“ Project Structure

```
Social-Media-Agent/
β”œβ”€β”€ πŸ€– agent.py                    # Main social media agent implementation
β”œβ”€β”€ πŸ”§ qwen_agent/                 # Qwen-Agent library
β”œβ”€β”€ πŸ“‹ tasks/                      # Task-specific modules
β”‚   β”œβ”€β”€ πŸ“± media_content_recommend/
β”‚   β”œβ”€β”€ 🚫 misinformation_detection/
β”‚   β”œβ”€β”€ 🚨 realtime_event_detection/
β”‚   β”œβ”€β”€ ❓ social_media_question_answering/
β”‚   β”œβ”€β”€ πŸ“Š streaming_event_summary/
β”‚   β”œβ”€β”€ πŸ’¬ user_comment_simulation/
β”‚   β”œβ”€β”€ 😊 user_emotion_analysis/
β”‚   └── 🎯 user_behavior_prediction/
β”œβ”€β”€ πŸ› οΈ tools/                      # Tools for social media analysis
β”œβ”€β”€ πŸ§ͺ test_*.py                   # Test scripts for each task
β”œβ”€β”€ πŸ“Š eval_scripts/               # Evaluation scripts for scoring
β”œβ”€β”€ πŸ“‚ results/                    # Directory for storing results
β”œβ”€β”€ πŸ“Š datasets/                   # Dataset directory
└── πŸ’Ύ database/                   # Database directory
```

## πŸš€ Installation

### Prerequisites

  • Python 3.12+ installed on your system
  • Git installed for repository cloning
  • Sufficient disk space for data (recommended: 50GB+)

### Installation Steps

  1. πŸ“₯ **Clone the repository**

     ```shell
     git clone https://github.com/LivXue/SoMe.git
     cd SoMe
     ```

  2. πŸ“¦ **Install dependencies**

     ```shell
     pip install -r requirements.txt
     ```

  3. πŸ“₯ **Download test data**

     After downloading, unzip the data into the `database` directory.
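The unzip step can also be scripted. The sketch below is a minimal helper, not part of the repository; the archive name `some_data.zip` is a placeholder assumption, so substitute the actual file you downloaded.

```python
# Hypothetical helper: extract a downloaded archive into the database/ directory.
# "some_data.zip" is a placeholder, not the real archive name.
import pathlib
import zipfile

def extract_data(archive: str = "some_data.zip", dest: str = "database") -> None:
    """Unzip `archive` into `dest`, creating the directory if needed."""
    pathlib.Path(dest).mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
```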


## πŸ’» Usage

πŸƒβ€β™‚οΈ Running Individual Tasks

Each task can be evaluated using its corresponding test script:

```shell
# 🚨 Realtime Event Detection
python test_realtime_event_detection.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# πŸ“Š Streaming Event Summary
python test_streaming_event_summary.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 🚫 Misinformation Detection
python test_misinformation_detection.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 🎯 User Behavior Prediction
python test_user_behavior_prediction.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# 😊 User Emotion Analysis
python test_user_emotion_analysis.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# πŸ’¬ User Comment Simulation
python test_user_comment_simulation.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# πŸ“± Media Content Recommendation
python test_media_content_recommend.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY

# ❓ Social Media Question Answering
python test_social_media_question_answering.py --model MODEL_NAME --base_url MODEL_SERVER_URL --api_key API_KEY
```

βš™οΈ Command Line Arguments

| Argument | Description | Example |
|---|---|---|
| `--model` | The model name to use | `"deepseek-chat"` |
| `--base_url` | The base URL for the model server | `"https://api.deepseek.com"` |
| `--api_key` | The API key for the model server | Your actual API key |
| `--output_path` | Output path for results | `"results/my_experiment"` |
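For reference, a parser for these flags can be sketched with `argparse`. This is an illustrative reconstruction, not the repository's actual parser; the `results` default for `--output_path` is an assumption.

```python
# Hedged sketch of an argument parser matching the flags documented above.
# The default output path is an illustrative assumption.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Run a SoMe benchmark task")
    parser.add_argument("--model", required=True, help="Model name, e.g. deepseek-chat")
    parser.add_argument("--base_url", required=True, help="Base URL of the model server")
    parser.add_argument("--api_key", required=True, help="API key for the model server")
    parser.add_argument("--output_path", default="results", help="Where to store results")
    return parser

args = build_parser().parse_args(
    ["--model", "deepseek-chat",
     "--base_url", "https://api.deepseek.com",
     "--api_key", "sk-..."]
)
```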

## πŸ“Š Evaluation

After running the test scripts, evaluate the results using the provided evaluation scripts:

```shell
# Option 1: For tasks with LLM-based answer extraction
python eval_scripts/[TASK]_extraction.py
python eval_scripts/[TASK]_compute_score.py

# Option 2: For tasks with LLM-as-judge scoring
python eval_scripts/[TASK]_scoring.py
python eval_scripts/[TASK]_compute_score.py
```

**Note:** The LLM settings for evaluation are configured in `eval_scripts/settings.json`.
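As a rough guide, such a settings file typically names the judge model and its endpoint. The snippet below only illustrates a plausible shape; the field names and values are assumptions, not the repository's actual schema, so check `eval_scripts/settings.json` itself.

```python
# Hypothetical shape of eval_scripts/settings.json; every field name and value
# here is an illustrative assumption, not the repository's actual schema.
import json

settings = {
    "model": "deepseek-chat",                 # LLM used for extraction/judging
    "base_url": "https://api.deepseek.com",   # OpenAI-compatible endpoint
    "api_key": "YOUR_API_KEY",                # credential for the model server
}
print(json.dumps(settings, indent=2))
```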


## 🧠 Model Support

The benchmark supports various LLM models through OpenAI-compatible API endpoints:

  • 🧩 Qwen series models (Qwen2.5, Qwen3, etc.)
  • πŸ”Œ OpenAI models (GPT-4, GPT-5, etc.)
  • 🌐 Third-party models with OpenAI-compatible APIs (DeepSeek, Claude, etc.)
  • πŸ“¦ Local models served with OpenAI-compatible wrappers (vLLM, Ollama, etc.)

## πŸ“š Citation

If you use this benchmark in your research, please cite our paper:

```bibtex
@inproceedings{some2026,
  title={SoMe: A Realistic Benchmark for LLM-based Social Media Agents},
  author={Dizhan Xue and Jing Cui and Shengsheng Qian and Chuanrui Hu and Changsheng Xu},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2026}
}
```

## 🀝 Contributing

We welcome contributions to improve the benchmark! Here's how you can help:

  1. πŸ› Report bugs by opening issues with detailed descriptions
  2. πŸ’‘ Suggest features for new tasks or improvements
  3. πŸ”§ Submit code via pull requests for bug fixes or enhancements
  4. πŸ“Š Add datasets to expand the benchmark coverage
  5. πŸ“ Improve documentation for better usability

Please see our Contributing Guidelines for more details.


## πŸ“„ License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


πŸ™ Acknowledgments

We would like to express our gratitude to:

  • The Qwen team for their excellent Qwen-Agent framework, which forms the foundation of this benchmark
  • All contributors who have helped develop and improve SoMe
  • The social media platforms and data providers that make this research possible
  • The AAAI 2026 reviewers for their valuable feedback

## πŸ“ž Contact

For questions or inquiries about the benchmark, please reach out via our GitHub repository, which also hosts the latest updates and discussions.