


Important: this dataset is part of the torchmil library.

Traffic Anomaly Detection (TAD) dataset adapted for Multiple Instance Learning (MIL).

About the Original Traffic Anomaly Detection (TAD) Dataset

The original Traffic Anomaly Detection (TAD) dataset contains video clips. Each clip is labeled to indicate whether or not it contains an anomaly; however, frame-level labels are not available.

Dataset Description

We have preprocessed the videos by computing features for each frame using various feature extractors.

  • A video is labeled as positive (label=1) if it contains at least one anomalous frame.
  • A video is labeled as negative (label=0) otherwise.

This means a video is considered positive if there is any evidence of traffic anomaly.
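This labeling rule is the standard MIL assumption: a bag (video) is positive if and only if at least one of its instances (frames) is positive. A minimal sketch, where the function name and the example label arrays are illustrative rather than taken from the dataset:

```python
import numpy as np

def bag_label(frame_labels: np.ndarray) -> int:
    # Standard MIL assumption: the bag (video) is positive if at
    # least one instance (frame) is positive.
    return int(np.any(frame_labels == 1))

# Illustrative frame-label arrays (not real dataset values):
print(bag_label(np.array([0, 0, 1, 0, 0])))  # -> 1
print(bag_label(np.array([0, 0, 0, 0, 0])))  # -> 0
```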

Directory structure

The following directory structure is expected:

root
├── features
│   ├── features_{features}
│   │   ├── video1.npy
│   │   ├── video2.npy
│   │   └── ...
├── labels
│   ├── video1.npy
│   ├── video2.npy
│   └── ...
└── splits.csv

Each .npy file corresponds to a video. The splits.csv file defines train/test splits for standardized experimentation.
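To make the layout concrete, here is a sketch of loading a single bag from a local copy of the dataset. The helper name, the root path, and the default `DINOv2` extractor name are assumptions (the extractor name is inferred from the file names in the repository), not an official torchmil API:

```python
import numpy as np
from pathlib import Path

def load_bag(root: Path, video_id: str, extractor: str = "DINOv2"):
    # One .npy file per video: a (num_frames, feat_dim) instance matrix
    # under features/, and the matching label file under labels/.
    X = np.load(root / "features" / f"features_{extractor}" / f"{video_id}.npy")
    y = np.load(root / "labels" / f"{video_id}.npy")
    return X, y

# Usage (hypothetical local path and video name):
# X, y = load_bag(Path("TAD_MIL/dataset"), "Normal_001")
```

The train/test membership of each video can then be read from `splits.csv` (e.g. with `pandas.read_csv`).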
