
MCA^2 Data & Embeddings

Paper | GitHub

This repository provides the raw data (data/) and the corresponding precomputed multi-view embeddings (embeddings/) for MCA^2, a two-stage multi-view text anomaly detection (TAD) framework.

MCA^2 exploits embeddings from multiple pretrained language models (views) and integrates them via a multi-view reconstruction model, contrastive collaboration, and adaptive allocation to identify anomalies. This dataset release facilitates reproduction by providing pre-extracted vectors, avoiding the need for expensive re-computation across various encoders (e.g., BERT, Stella, Qwen, and OpenAI).
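The MCA^2 model itself (multi-view reconstruction, contrastive collaboration, adaptive allocation) is implemented in the GitHub repository. Purely to illustrate the multi-view idea, the sketch below shows a naive baseline, not the paper's method: each view is scored independently with a kNN-distance detector, the per-view scores are z-normalized so views with different dimensionality are comparable, and the normalized scores are averaged.

```python
import numpy as np

def knn_score(emb, k=5):
    """Anomaly score per row: mean Euclidean distance to the k nearest neighbors."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                                     # ignore self-distance
    return np.sort(d, axis=1)[:, :k].mean(axis=1)

def multiview_score(views, k=5):
    """Z-normalize each view's scores so views are comparable, then average."""
    per_view = []
    for emb in views:
        s = knn_score(emb, k)
        per_view.append((s - s.mean()) / (s.std() + 1e-8))
    return np.mean(per_view, axis=0)

rng = np.random.default_rng(0)
view_a = rng.normal(size=(100, 32))  # stand-ins for two encoders' embeddings
view_b = rng.normal(size=(100, 64))
view_a[0] += 10.0                    # plant an obvious outlier in both views
view_b[0] += 10.0

scores = multiview_score([view_a, view_b])
print(scores.argmax())  # index of the planted outlier
```

Averaging normalized per-view scores is the simplest possible fusion; MCA^2's adaptive allocation replaces this uniform weighting with learned, per-sample view weights.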

Content

  • data/: Dataset files including train/test splits (e.g., .npz and .jsonl files).
  • embeddings/: Pre-extracted vectors grouped by dataset and split. Multiple embedding files correspond to different "views" or encoders.
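The exact file names under data/ and embeddings/ vary per dataset and encoder, so the paths below are illustrative only. A minimal loading sketch, assuming one .npz embedding file per split paired with a .jsonl file of raw records:

```python
import json
import numpy as np

def load_split(emb_path, meta_path):
    """Load one (embedding .npz, records .jsonl) pair and check they align."""
    with np.load(emb_path) as f:
        emb = f[f.files[0]]            # first array in the archive, shape (n, dim)
    with open(meta_path, encoding="utf-8") as fh:
        records = [json.loads(line) for line in fh]
    assert len(records) == emb.shape[0], "embedding rows must align with raw records"
    return emb, records

# Demo on tiny synthetic files; real paths look like embeddings/<dataset>/...
# and data/<dataset>/..., with names depending on the dataset and view.
np.savez("demo_test.npz", emb=np.zeros((2, 4), dtype=np.float32))
with open("demo_test.jsonl", "w", encoding="utf-8") as fh:
    fh.write('{"text": "a", "label": 0}\n{"text": "b", "label": 1}\n')

emb, records = load_split("demo_test.npz", "demo_test.jsonl")
print(emb.shape, records[1]["label"])
```

Inspecting `f.files` after `np.load` is the quickest way to see which arrays a given .npz archive actually contains.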

Sample Usage

To reproduce the results for a specific dataset (such as OLID) using the MCA^2 framework, you can follow the instructions from the official repository:

# 1. Setup environment
conda create -n MCA2 python=3.9
conda activate MCA2
pip install torch sentence-transformers numpy transformers scikit-learn pandas tqdm pyod accelerate

# 2. Clone the repository and navigate to the evaluation directory
git clone https://github.com/yankehan/MCA2
cd MCA2/multiview_two_stage/eval

# 3. Run the evaluation script (ensure data and embeddings are placed in the project directory)
python ourmethod_eval.py --dataset olid --seeds 41,42,43,44,45

Notes

  • Embeddings can be large; start with a smaller dataset such as TAD-OLID.
  • If downloads are slow, try a Hugging Face mirror (e.g., https://hf-mirror.com).
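For huggingface_hub and huggingface-cli, a mirror can be selected via the HF_ENDPOINT environment variable; the repo id below is taken from this page, and the local directory is just an example:

```shell
# Route Hugging Face Hub downloads through a mirror by setting HF_ENDPOINT.
export HF_ENDPOINT=https://hf-mirror.com

# Subsequent huggingface-cli / huggingface_hub calls then use the mirror, e.g.:
#   huggingface-cli download ZhaXinke/MCA2 --repo-type dataset --local-dir ./MCA2
echo "$HF_ENDPOINT"
```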

Citation

If you use this dataset or the MCA^2 framework in your research, please cite:

@article{liu2026beyond,
  title={Beyond a Single Perspective: Text Anomaly Detection with Multi-View Language Representations},
  author={Yixin Liu and Kehan Yan and Shiyuan Li and others},
  journal={arXiv preprint arXiv:2601.17786},
  year={2026}
}

License

This dataset is released under the MIT License.
