---
language: en
license: apache-2.0
tags:
  - conversational-competence
  - conversation-analysis
  - natural-conversation
  - multi-turn
  - benchmark
size: 720
task_categories:
  - question-answering
size_categories:
  - n<1K
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: task
      dtype: string
    - name: chat_prompt
      list:
        - name: role
          dtype: string
        - name: content
          dtype: string
  splits:
    - name: basic
      num_bytes: 58382
      num_examples: 180
    - name: rag
      num_bytes: 240919
      num_examples: 180
    - name: complex_request
      num_bytes: 252336
      num_examples: 360
  download_size: 62045
  dataset_size: 551637
configs:
  - config_name: default
    data_files:
      - split: basic
        path: data/basic-*
      - split: rag
        path: data/rag-*
      - split: complex_request
        path: data/complex_request-*
---

# Dataset Card for Natural Conversation Benchmark (NC-Bench)

## Dataset Summary

The Natural Conversation Benchmark (NC-Bench) aims to answer the question: how well can generative AI converse the way humans do? In other words, the benchmark begins to measure the general conversational competence of large language models (LLMs) by testing a model's ability to generate an appropriate type of conversational action, or dialogue act, in response to a particular sequence of actions. These sequences of conversational actions, or patterns, are adapted from conversation science, specifically the model of sequence organization in the field of conversation analysis (Schegloff, 2007) and the pattern library of the IBM Natural Conversation Framework (Moore & Arar, 2019). Models are tested by generating the next line in a transcript. NC-Bench is a lightweight method that is easily extensible to additional conversation patterns.
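
For a quick look at the data, the snippet below loads the benchmark with the 🤗 `datasets` library and prints one sample. The repository id used here (`nc-bench`) is a placeholder; substitute the dataset's actual Hub path.

```python
from datasets import load_dataset

# "nc-bench" is a placeholder repo id; use the dataset's actual Hub path.
ds = load_dataset("nc-bench")  # splits: basic, rag, complex_request

sample = ds["basic"][0]
print(sample["id"], sample["task"])

# chat_prompt is a list of {role, content} turns; the model under test
# must generate the next turn of the conversation.
for turn in sample["chat_prompt"]:
    print(f"{turn['role']}: {turn['content']}")
```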

The dataset consists of 720 samples across three splits:

- **Basic (180):** Patterns that capture basic practices of sequence management: answering inquiries, repairing answers, and closing pair sequences. This set uses ordinary conversational use cases and does NOT include passages for retrieval-augmented generation (RAG). (See the leaderboard below.)
- **RAG (180):** The sequence management patterns from Basic, but with the inclusion of a passage of information for RAG. Determining the faithfulness of the models' responses to the passage is not a primary goal. Instead, the goal is to determine whether the model can maintain the conversation pattern in the face of a document context, which carries a competing language style and format. The use cases in this set involve information giving, using Wikipedia as a source.
- **Complex Request (360):** Sequence management patterns involving complex requests. Such requests require the agent to elicit details from the fictional user (for example, slot filling). Other patterns involve preliminaries to the inquiry-answer pair (i.e., pre-expansions). These use cases are business-related.
| Set | Pattern | Count | Pattern | Count |
|---|---|---|---|---|
| Basic | Inquiry | 20 | Paraphrase Request | 20 |
| | Incremental Request | 20 | Repeat Request | 20 |
| | Self-Correction | 20 | Example Request | 20 |
| | Definition Request | 20 | Sequence Closer | 20 |
| | Sequence Abort | 20 | **Total** | **180** |
| RAG | Inquiry | 20 | Paraphrase Request | 20 |
| | Inquiry (Ungrounded) | 20 | Repeat Request | 20 |
| | Incremental/Correction | 20 | Example Request | 20 |
| | Definition Request | 20 | Sequence Closer | 20 |
| | Sequence Abort | 20 | **Total** | **180** |
| Complex Request | Preliminary | 40 | Paraphrase Request | 20 |
| | Recommendation | 60 | Repeat Request | 40 |
| | Detail Request | 60 | Example Request | 20 |
| | Expansion | 40 | Sequence Closer | 20 |
| | Self-Correction | 20 | Sequence Abort | 20 |
| | Definition Request | 20 | **Total** | **360** |
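
As a quick sanity check against the counts above (and assuming the `task` field carries the pattern label), the per-pattern distribution can be tallied like this; the repo id is a placeholder as before:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("nc-bench")  # placeholder repo id

for split in ("basic", "rag", "complex_request"):
    # Count how many samples each pattern/task label has in this split.
    counts = Counter(ds[split]["task"])
    print(split, len(ds[split]), dict(counts))
```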

## Intended Uses

- **Benchmarking:** An expert-annotated benchmark for evaluating the general conversational competence of large language models (LLMs).

## Limitations

- **Size:** With <1K samples, the dataset is best suited for evaluation, not large-scale training.

## Leaderboard

### Evaluation Code & Criteria

The full evaluation pipeline, judge setup, and per-criterion scoring logic are available on GitHub.
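
For orientation only, here is a minimal sketch of the generation step such a pipeline runs before judging: each sample's `chat_prompt` is fed to a candidate chat model, and the model's next turn is collected for scoring. The model id and generation settings below are illustrative, not the leaderboard configuration; the authoritative code is in the GitHub repository.

```python
from datasets import load_dataset
from transformers import pipeline

# Illustrative only -- the official judge setup and per-criterion scoring
# live in the GitHub repository. Any instruction-tuned chat model works here.
generator = pipeline("text-generation", model="ibm-granite/granite-3.3-2b-instruct")

ds = load_dataset("nc-bench", split="basic")  # placeholder repo id

records = []
for sample in ds:
    # chat_prompt is already a list of {role, content} messages; the model's
    # reply is the "next line in the transcript" that the judge will score.
    out = generator(sample["chat_prompt"], max_new_tokens=128)
    reply = out[0]["generated_text"][-1]["content"]
    records.append({"id": sample["id"], "task": sample["task"], "reply": reply})
```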

- Qwen2.5-3B-Inst achieves the highest conversational-competence score on the Basic set (82.22%).
- granite-3.3-8b-inst performs best on the RAG set (77.77%).
- granite-3.3-2b-inst performs best on the Complex Request set (80.15%).
| Model | Basic (%) | RAG (%) | Complex Request (%) |
|---|---|---|---|
| granite-3.3-2b-inst | 72.22 | 76.11 | 80.15 |
| granite-3.3-8b-inst | 76.11 | 77.77 | 77.04 |
| Llama-3.2-3B-Inst | 66.66 | 60.00 | 67.80 |
| Llama-3.1-8B-Inst | 68.88 | 68.88 | 71.06 |
| Qwen2.5-3B-Inst | 82.22 | 75.55 | 62.19 |
| Qwen2.5-7B-Inst | 80.55 | 73.88 | 76.06 |

## References

Emanuel A. Schegloff, *Sequence Organization in Interaction: A Primer in Conversation Analysis* (Cambridge University Press, 2007).

Robert J. Moore and Raphael Arar, *Conversational UX Design: A Practitioner's Guide to the Natural Conversation Framework* (ACM Books, 2019).

## How to cite this work

If you use this dataset in your research, please cite it as follows:

**BibTeX:**

```bibtex
@article{moore2026nc,
  title={NC-Bench: An LLM Benchmark for Evaluating Conversational Competence},
  author={Moore, Robert J and An, Sungeun and Ahmed, Farhan and Gala, Jay Pankaj},
  journal={arXiv preprint arXiv:2601.06426},
  year={2026}
}
```