---
license: mit
task_categories:
- question-answering
- other
language:
- en
tags:
- human-ai-interaction
- model-evaluation
- preference-learning
- conversational-ai
pretty_name: HUMAINE Human-AI Interaction Evaluation Dataset
size_categories:
- 100K<n<1M
configs:
- config_name: feedback_comparisons
  data_files: feedback_dataset.parquet
  default: true
- config_name: conversations_metadata
  data_files: conversations_metadata_dataset.parquet
---

# HUMAINE: Human-AI Interaction Evaluation Dataset

## Dataset Description

### Dataset Summary

The HUMAINE dataset contains human evaluations of AI model interactions across diverse demographic groups and conversation contexts. This dataset powers the [HUMAINE Leaderboard](https://huggingface.co/spaces/ProlificAI/humaine-leaderboard), providing insight into how different AI models perform across user populations and use cases.

The dataset consists of two components:
- **Feedback Comparisons**: Pairwise model comparisons across multiple evaluation metrics
- **Conversations Metadata**: Per-conversation metadata with task complexity, goal achievement, and engagement scores

**Note**: The numbers in this dataset may differ slightly from those in the leaderboard app because of changes in consent related to data release and the post-processing steps involved in preparing this dataset.

### Supported Tasks

- Model performance evaluation
- Demographic bias analysis
- Preference learning
- Human-AI interaction research
- Conversational AI benchmarking

## Dataset Structure

### Data Files

The dataset contains two Parquet files:

1. **`feedback_dataset.parquet`**
   - Pairwise comparisons between different AI models
   - Includes demographic information and preference choices

2. **`conversations_metadata_dataset.parquet`**
   - Metadata about individual conversations between users and AI models
   - Includes task types, domains, and performance scores

### Data Fields

#### Feedback Comparisons

- `conversation_id`: Unique identifier linking to conversation metadata
- `model_a`: First model in the comparison
- `model_b`: Second model in the comparison
- `metric`: Evaluation metric (`overall_winner`, `trust_ethics_and_safety`, `core_task_performance_and_reasoning`, `interaction_fluidity_and_adaptiveness`)
- `choice`: Evaluator's choice (`A`, `B`, or `tie`)
- `age`: Age of the evaluator
- `ethnic_group`: Ethnic group of the evaluator
- `political_affilation`: Political affiliation of the evaluator
- `country_of_residence`: Country of residence of the evaluator

#### Conversations Metadata

- `conversation_id`: Unique identifier for the conversation
- `model_name`: Name of the AI model used
- `task_type`: Type of task (information_seeking, technical_assistance, etc.)
- `domain`: Domain of the conversation (health_medical, technology, travel, etc.)
- `task_complexity_score`: Complexity rating (1-5)
- `goal_achievement_score`: How well the goal was achieved (1-5)
- `user_engagement_score`: User engagement level (1-5)
- `total_messages`: Total number of messages in the conversation
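
To make the feedback schema concrete, here is a minimal sketch (toy rows with hypothetical model names and values, not taken from the real data) of how `model_a`, `model_b`, and `choice` combine into per-model win counts for one metric, counting a tie as half a win for each side:

```python
from collections import Counter

# Toy feedback rows mimicking the schema above (values are hypothetical).
feedback = [
    {"model_a": "model-x", "model_b": "model-y", "metric": "overall_winner", "choice": "A"},
    {"model_a": "model-x", "model_b": "model-y", "metric": "overall_winner", "choice": "B"},
    {"model_a": "model-y", "model_b": "model-x", "metric": "overall_winner", "choice": "tie"},
]

def win_counts(rows, metric="overall_winner"):
    """Count wins per model for one metric; a tie is half a win for each side."""
    wins = Counter()
    for row in rows:
        if row["metric"] != metric:
            continue
        if row["choice"] == "A":
            wins[row["model_a"]] += 1.0
        elif row["choice"] == "B":
            wins[row["model_b"]] += 1.0
        else:  # tie
            wins[row["model_a"]] += 0.5
            wins[row["model_b"]] += 0.5
    return wins

print(win_counts(feedback))  # model-x: 1.5, model-y: 1.5
```

The tie-handling convention (0.5 each) is only one reasonable choice; analyses that exclude ties or model them separately are equally valid.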

## Usage

The two Parquet files can be joined on the `conversation_id` field:
- `feedback_dataset.parquet`: Pairwise model comparisons with demographic information (primary dataset)
- `conversations_metadata_dataset.parquet`: Metadata about each conversation

Both files are included in this single dataset repository and can be accessed using Hugging Face's dataset loading utilities.
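
As a minimal sketch of the join, shown here on toy in-memory rows so it runs without network access; with the real data you would first load each config (e.g. `datasets.load_dataset("ProlificAI/humaine-evaluation-dataset", "feedback_comparisons")`, repo id taken from the citation below) and convert to pandas:

```python
import pandas as pd

# Toy stand-ins for the two tables (hypothetical values).
# With the real data: load each config via datasets.load_dataset(...)
# and call .to_pandas() first.
feedback = pd.DataFrame({
    "conversation_id": ["c1", "c2"],
    "model_a": ["model-x", "model-y"],
    "model_b": ["model-y", "model-x"],
    "metric": ["overall_winner", "overall_winner"],
    "choice": ["A", "tie"],
})
metadata = pd.DataFrame({
    "conversation_id": ["c1", "c2"],
    "model_name": ["model-x", "model-y"],
    "task_type": ["information_seeking", "technical_assistance"],
    "goal_achievement_score": [4, 3],
})

# Left join: every comparison keeps its conversation-level metadata.
joined = feedback.merge(metadata, on="conversation_id", how="left")
print(joined[["conversation_id", "choice", "task_type"]])
```

A left join keeps every comparison row even if a conversation's metadata were missing; an inner join would silently drop such rows.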

## Dataset Creation

### Curation Rationale

This dataset was created to address the lack of diverse, demographically aware evaluation data for AI models. It captures real-world human preferences and interactions across different population groups, enabling more inclusive AI development.

### Source Data

Data was collected through structured human evaluation tasks in which participants:
1. Engaged in conversations with various AI models
2. Provided pairwise comparisons between model outputs
3. Rated conversations on multiple quality dimensions (metrics)

### Annotations

All annotations were provided by human evaluators recruited through the Prolific platform, ensuring demographic diversity and high-quality feedback.

### Personal and Sensitive Information

The dataset contains aggregated demographic information (age groups, ethnic groups, political affiliations, countries) but no personally identifiable information. All data has been anonymized and aggregated to protect participant privacy.

## Considerations for Using the Data

### Social Impact

This dataset aims to promote more inclusive AI development by highlighting performance differences across demographic groups. It should be used to improve AI systems' fairness and effectiveness for all users.

### Discussion of Biases

While efforts were made to ensure demographic diversity, the dataset may still contain biases related to:
- Geographic representation (primarily US and UK participants)
- Self-selection bias in participant recruitment
- Cultural and linguistic factors affecting evaluation criteria

### Other Known Limitations

- Limited to English-language interactions
- Demographic categories are self-reported
- Temporal bias (models evaluated at specific points in time)

## Additional Information

### Dataset Curators

This dataset was curated by the Prolific AI team as part of the HUMAINE (Human-AI Interaction Evaluation) project.

### Licensing Information

This dataset is released under the MIT License.

### Citation Information

```bibtex
@dataset{humaine2025,
  title={HUMAINE: Human-AI Interaction Evaluation Dataset},
  author={Prolific AI Team},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/ProlificAI/humaine-evaluation-dataset}
}
```

### Contributions

Thanks to all the human evaluators who contributed their feedback to this project!

## Contact

For questions or feedback about this dataset, please visit the [HUMAINE Leaderboard](https://huggingface.co/spaces/ProlificAI/humaine-leaderboard) or contact the Prolific AI team.