nielsr HF Staff committed on
Commit 3e280b7 · verified · 1 Parent(s): 413e4a5

Link paper and GitHub repository, and add sample usage


Hi! I'm Niels from the Hugging Face community science team.

This pull request improves the dataset card for the MCA^2 dataset. Key changes include:
- Added the `arxiv: 2601.17786` tag to the metadata to link the dataset to its corresponding research paper.
- Included links to the official paper and GitHub repository.
- Added a "Sample Usage" section with code snippets derived from the GitHub README to help users reproduce the results.
- Provided a BibTeX citation for the paper.
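The workflow the updated card describes — per-view embeddings computed offline, then combined for anomaly detection — can be sanity-checked with a small NumPy sketch. Everything here is an illustrative assumption (the view names `bert`/`qwen`, the `.npz` layout, and the toy centroid-distance score, which is not MCA^2's actual detector):

```python
# A hypothetical sketch of the multi-view workflow the card describes:
# save per-view embeddings, reload them, and combine views. All names
# ("bert", "qwen") and the .npz layout are illustrative assumptions.
import io

import numpy as np

# Simulate two "views" of the same 8 samples (e.g., two different encoders).
rng = np.random.default_rng(0)
view_a = rng.normal(size=(8, 4)).astype(np.float32)
view_b = rng.normal(size=(8, 6)).astype(np.float32)

# Store both views in one .npz archive, as a precomputed-embedding release might.
buf = io.BytesIO()
np.savez(buf, bert=view_a, qwen=view_b)
buf.seek(0)

# Reload, L2-normalize each view, and concatenate into one multi-view matrix.
views = np.load(buf)
normalized = [v / np.linalg.norm(v, axis=1, keepdims=True)
              for v in (views["bert"], views["qwen"])]
multi_view = np.concatenate(normalized, axis=1)

# A toy anomaly score (NOT MCA^2's detector): distance to the centroid.
scores = np.linalg.norm(multi_view - multi_view.mean(axis=0), axis=1)
print(multi_view.shape, scores.shape)  # → (8, 10) (8,)
```

Per-view normalization before concatenation keeps one high-dimensional encoder from dominating the combined space; the actual integration in MCA^2 is described in the paper.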

Files changed (1): README.md (+51 −26)
README.md CHANGED
@@ -1,51 +1,76 @@
 ---
-license: mit
 language:
-- en
-tags:
-- anomaly-detection
-- multi-view
-- embeddings
-- representation-learning
-- contrastive-learning
-task_categories:
-- text-classification
 size_categories:
-- 1K<n<10K
-pretty_name: "MCA^2 Data & Embeddings"
 dataset_info:
   features:
-  - name: data
-    dtype: "file"
-  - name: embeddings
-    dtype: "file"
 ---
 
 # MCA^2 Data & Embeddings
 
-This repository provides the **raw data (`data/`)** and the corresponding **precomputed multi-view embeddings (`embeddings/`)** for **MCA^2**, a **two-stage multi-view anomaly detection** framework.
 
-MCA^2 is not an end-to-end pipeline: it first generates embeddings for the same sample from multiple "views" (e.g., different encoders / feature sources) and stores them offline, then trains/evaluates the anomaly detector on top of these embeddings. This dataset release is intended to make reproduction easier and avoid re-computing expensive embeddings (especially those requiring paid APIs or heavy inference).
 
 ## Content
 
-- **data/**: dataset files (e.g., train/test splits)
-- **embeddings/**: pre-extracted vectors grouped by dataset and split (train/test); multiple embedding files correspond to different views/encoders
 
-## Usage
 
-1. Download the required dataset files under `data/`.
-2. Download the corresponding embedding files under `embeddings/`.
 
 ## Notes
 
-- Embeddings can be large; it is recommended to start with a smaller dataset first.
 - If downloads are slow, you may try using a Hugging Face mirror (e.g., `https://hf-mirror.com`).
 
 ## Citation
 
-If you use this dataset in your work, please cite our MCA^2 paper.
 
 ## License
 
-MIT License
 ---
 language:
+- en
+license: mit
 size_categories:
+- 1K<n<10K
+task_categories:
+- text-classification
+pretty_name: MCA^2 Data & Embeddings
+tags:
+- anomaly-detection
+- multi-view
+- embeddings
+- representation-learning
+- contrastive-learning
+arxiv: 2601.17786
 dataset_info:
   features:
+  - name: data
+    dtype: file
+  - name: embeddings
+    dtype: file
 ---
 
 # MCA^2 Data & Embeddings
 
+[**Paper**](https://huggingface.co/papers/2601.17786) | [**GitHub**](https://github.com/yankehan/MCA2)
 
+This repository provides the **raw data (`data/`)** and the corresponding **precomputed multi-view embeddings (`embeddings/`)** for **MCA^2**, a two-stage multi-view text anomaly detection (TAD) framework.
+
+MCA^2 exploits embeddings from multiple pretrained language models (views) and integrates them via a multi-view reconstruction model, contrastive collaboration, and adaptive allocation to identify anomalies. This dataset release facilitates reproduction by providing pre-extracted vectors, avoiding the need for expensive re-computation across various encoders (e.g., BERT, Stella, Qwen, and OpenAI).
 
 ## Content
 
+- **data/**: Dataset files including train/test splits (e.g., `.npz` and `.jsonl` files).
+- **embeddings/**: Pre-extracted vectors grouped by dataset and split. Multiple embedding files correspond to different "views" or encoders.
+
+## Sample Usage
+
+To reproduce the results for a specific dataset (such as OLID) using the MCA^2 framework, you can follow the instructions from the official repository:
 
+```bash
+# 1. Set up the environment
+conda create -n MCA2 python=3.9
+conda activate MCA2
+pip install torch sentence-transformers numpy transformers scikit-learn pandas tqdm pyod accelerate
 
+# 2. Clone the repository and navigate to the evaluation directory
+git clone https://github.com/yankehan/MCA2
+cd MCA2/multiview_two_stage/eval
+
+# 3. Run the evaluation script (ensure data and embeddings are placed in the project directory)
+python ourmethod_eval.py --dataset olid --seeds 41,42,43,44,45
+```
 
 ## Notes
 
+- Embeddings can be large; it is recommended to start with a smaller dataset like **TAD-OLID** first.
 - If downloads are slow, you may try using a Hugging Face mirror (e.g., `https://hf-mirror.com`).
 
 ## Citation
 
+If you use this dataset or the MCA^2 framework in your research, please cite:
+
+```bibtex
+@article{yan2026beyond,
+  title={Beyond a Single Perspective: Text Anomaly Detection with Multi-View Language Representations},
+  author={Yan, Kehan and others},
+  journal={arXiv preprint arXiv:2601.17786},
+  year={2026}
+}
+```
 
 ## License
 
+This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).
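
The updated card lists `.jsonl` files under `data/` but does not show how to read them; a minimal stdlib sketch follows. The `text`/`label` field names are assumptions about the schema, not the repository's documented format:

```python
# A hypothetical reader for the .jsonl split files mentioned under data/.
# The "text" and "label" field names are assumptions about the schema.
import io
import json

# Stand-in for an opened split file, with two example records.
sample_jsonl = io.StringIO(
    '{"text": "an ordinary sentence", "label": 0}\n'
    '{"text": "an offensive outlier", "label": 1}\n'
)

# One JSON object per line; skip blank lines defensively.
records = [json.loads(line) for line in sample_jsonl if line.strip()]
texts = [r["text"] for r in records]
labels = [r["label"] for r in records]
print(len(records), labels)  # → 2 [0, 1]
```

To read a real split, swap the `StringIO` stand-in for `open("data/<dataset>/<split>.jsonl")` and check the actual field names in the files first.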