
# Multi-Domain Support Conversations Dataset

This dataset combines 600 realistic customer support conversations from six different domains:

  • Customer Service (100 conversations)
  • E-commerce (100 conversations)
  • Financial Support (100 conversations)
  • HR / Onboarding (100 conversations)
  • Medical Helpdesk (100 conversations)
  • Technical Support (100 conversations)

Each dialogue is exactly 11 turns long and covers a wide variety of issues, making it ideal for training general-purpose AI assistants, chatbots, and customer support models.
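The fixed 11-turn length can be verified with a quick sanity pass. A minimal sketch, using inline sample records that mirror the dataset's schema (illustrative stand-ins, not actual dataset rows):

```python
# Sanity check: every conversation should contain exactly 11 dialogue turns.
# The sample records below are illustrative, not taken from the dataset itself.
def check_turn_counts(conversations, expected=11):
    """Return the ids of conversations whose dialogue length differs from `expected`."""
    return [c["id"] for c in conversations if len(c["dialogue"]) != expected]

sample = [
    {"id": 1, "dialogue": [{"role": "customer", "text": "hi"}] * 11},
    {"id": 2, "dialogue": [{"role": "customer", "text": "hi"}] * 10},
]
print(check_turn_counts(sample))  # [2]
```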

## Dataset Structure

Each conversation is stored as a JSON object with the following fields:

| Field | Type | Description |
|-------|------|-------------|
| `id` | integer | Unique identifier (1–600, grouped by domain) |
| `domain` | string | Original domain (e.g., `"customer service"`, `"e-commerce"`) |
| `problem` | string | Short description of the customer's issue |
| `customer_type` | string | Customer persona (see full list below) |
| `dialogue` | array | List of alternating customer/agent messages |
| `dialogue[].role` | string | `"customer"` or `"agent"` |
| `dialogue[].text` | string | The message content |
| `resolution` | string | One-sentence summary of how the issue was resolved |
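
Given this schema, the fields of a single record can be traversed as follows — a minimal sketch over an inline sample record that mirrors the fields above (not an actual dataset row):

```python
# Traverse one conversation record that follows the schema above.
# The record is an illustrative sample, not taken from the dataset itself.
conversation = {
    "id": 1,
    "domain": "customer service",
    "problem": "order never arrived",
    "customer_type": "impatient executive",
    "dialogue": [
        {"role": "customer", "text": "My order is 10 days late."},
        {"role": "agent", "text": "Could you share your order number?"},
    ],
    "resolution": "Order traced and escalated.",
}

# Split the dialogue into customer and agent turns
customer_turns = [t["text"] for t in conversation["dialogue"] if t["role"] == "customer"]
agent_turns = [t["text"] for t in conversation["dialogue"] if t["role"] == "agent"]

print(conversation["problem"], len(customer_turns), len(agent_turns))
```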

## Customer Personas (with descriptions)

The dataset includes six distinct customer types, evenly distributed within each domain in a fixed cycle: conversations 1–6 of each domain contain one of each type, and the pattern repeats every six conversations up to 100:

  • frustrated user – Emotionally charged, expects fast resolution, may be abrupt.
  • confused beginner – Low technical literacy, needs step-by-step guidance, asks basic questions.
  • impatient executive – Busy professional, direct, demands efficiency and authoritative responses.
  • elderly user – Older, patient, requires clear explanations and warm treatment.
  • tech-savvy user – High technical knowledge, uses precise terminology, wants exact details.
  • calm and patient user – Collaborative, easy-going, follows instructions without friction.

This balanced distribution ensures your model learns to handle a wide range of communication styles and needs.
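
Because every persona appears equally often, conversations can be grouped or filtered by `customer_type` directly. A minimal sketch with inline sample records (illustrative stand-ins, not actual dataset rows):

```python
from collections import defaultdict

# Group conversation ids by customer persona. The records below are
# illustrative samples that mirror the dataset's schema.
conversations = [
    {"id": 1, "domain": "customer service", "customer_type": "impatient executive"},
    {"id": 2, "domain": "customer service", "customer_type": "frustrated user"},
    {"id": 101, "domain": "e-commerce", "customer_type": "impatient executive"},
]

by_persona = defaultdict(list)
for conv in conversations:
    by_persona[conv["customer_type"]].append(conv["id"])

print(dict(by_persona))  # {'impatient executive': [1, 101], 'frustrated user': [2]}
```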

## Example Conversation

Below is a sample conversation from the customer service domain (customer type: impatient executive):

```json
{
  "id": 1,
  "domain": "customer service",
  "problem": "order never arrived",
  "customer_type": "impatient executive",
  "dialogue": [
    {
      "role": "customer",
      "text": "I placed an order 10 days ago and it still hasn't arrived. This is completely unacceptable."
    },
    {
      "role": "agent",
      "text": "I sincerely apologize for this delay. I'll look into this immediately. Could you provide your order number?"
    },
    {
      "role": "customer",
      "text": "Order number is #ORD-88421. I need this resolved today."
    },
    {
      "role": "agent",
      "text": "Thank you. I've pulled up your order. I can see it was delayed at the courier depot. I'm escalating this as high priority right now."
    },
    {
      "role": "customer",
      "text": "How long will an escalation actually take? I've heard that before."
    },
    {
      "role": "agent",
      "text": "I completely understand your frustration. I'm contacting the courier directly and will have a confirmed delivery window within 2 hours. I'll also apply a full shipping refund."
    },
    {
      "role": "customer",
      "text": "Alright. A 2-hour update is acceptable. What's the reference for this case?"
    },
    {
      "role": "agent",
      "text": "Your case reference is TKT-20394. I've also sent you an email confirmation with the courier's direct contact and your shipping refund confirmation."
    },
    {
      "role": "customer",
      "text": "I received the email. The refund is noted. I expect the delivery window shortly."
    },
    {
      "role": "agent",
      "text": "Absolutely. You'll receive an SMS from the courier within 90 minutes. Is there anything else I can assist with?"
    },
    {
      "role": "customer",
      "text": "No. Just make sure it's delivered. Thank you."
    }
  ],
  "resolution": "Order traced to courier depot delay. Case escalated with priority, shipping refund applied, and delivery window confirmed within 2 hours."
}
```
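
For supervised fine-tuning, a dialogue like the one above can be flattened into (customer, agent) message pairs. A minimal sketch, assuming turns alternate starting with the customer (as in the example); a trailing customer turn with no agent reply is dropped:

```python
# Turn an alternating customer/agent dialogue into (prompt, response) pairs.
# Assumes turns alternate starting with the customer; a final unanswered
# customer turn is intentionally discarded.
def to_pairs(dialogue):
    pairs = []
    for i in range(0, len(dialogue) - 1, 2):
        if dialogue[i]["role"] == "customer" and dialogue[i + 1]["role"] == "agent":
            pairs.append((dialogue[i]["text"], dialogue[i + 1]["text"]))
    return pairs

sample = [
    {"role": "customer", "text": "My order is late."},
    {"role": "agent", "text": "Could you share your order number?"},
    {"role": "customer", "text": "It's #ORD-88421."},
]
print(to_pairs(sample))  # [('My order is late.', 'Could you share your order number?')]
```

Applied to the 11-turn example above, this yields five training pairs, with the closing customer turn dropped.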

## How to Use

### Load with Python (local JSON)

```python
import json

# Load the file and extract the list of conversations
with open("multi_domain_support.json", "r", encoding="utf-8") as f:
    data = json.load(f)

conversations = data["conversations"]

# Example: print each conversation's domain and problem description
for conv in conversations:
    print(conv["domain"], conv["problem"])
```
### Load with Hugging Face Datasets

```python
from datasets import load_dataset

dataset = load_dataset("ai-training-datasets/multi-domain-support", split="train")
print(dataset[0])
```

## License

This dataset is released under the MIT License. You are free to use, modify, and distribute it for both research and commercial purposes.

## Citation

If you use this dataset in your work, please cite it as:

```bibtex
@dataset{ai_training_datasets_2026,
  title     = {AI Training Datasets: Multi-Domain Support Conversations},
  author    = {AI Training Datasets},
  year      = {2026},
  version   = {1.0},
  publisher = {Hugging Face}
}
```

## 💼 Commercial licensing

This dataset is available for free under the MIT License for non-commercial use.  
If you need a commercial license, custom datasets, or the full version with all 600 conversations, contact me at:

📧 cybernovasg@gmail.com