
# Adversarial Machine Learning TextFooler Dataset

## Overview

This dataset, `adversarial_machine_learning_textfooler_dataset.jsonl`, is designed for research and evaluation in adversarial machine learning, with a focus on text-based adversarial attacks. It contains pairs of original and adversarial text samples, primarily generated with the TextFooler attack method, along with other adversarial techniques such as Homoglyph, Number Substitution, Typo, Emoji, Semantic Shift, Paraphrase, and Zero-Width character injection. The dataset is intended for testing the robustness of natural language processing (NLP) models against adversarial perturbations.

## Dataset Description

The dataset is stored in JSON Lines (`.jsonl`) format, where each line represents a single record with the following fields:

- `original_text`: The original, unaltered text input.
- `adversarial_text`: The perturbed text after applying an adversarial attack.
- `true_label`: The sentiment or classification label of the original text (`positive`, `negative`, or `neutral`).
- `is_adversarial`: Boolean indicating whether the text is adversarial (`true` for all entries in this dataset).
- `attack_type`: The type of adversarial attack applied (e.g., TextFooler, Homoglyph, Number Substitution, Typo, Emoji, Semantic Shift, Paraphrase, Cyrillic Homoglyph, Zero-Width Space).
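As a minimal sketch of working with this schema, the snippet below validates a single record against the field names and types documented above. The `validate_record` helper and its type checks are illustrative assumptions, not part of the dataset itself:

```python
import json

# Expected schema, taken from the field list in this card.
# The type requirements are an assumption for illustration.
REQUIRED_FIELDS = {
    "original_text": str,
    "adversarial_text": str,
    "true_label": str,
    "is_adversarial": bool,
    "attack_type": str,
}

def validate_record(record: dict) -> bool:
    """Return True if every documented field is present with the expected type."""
    return all(
        isinstance(record.get(name), expected)
        for name, expected in REQUIRED_FIELDS.items()
    )

# One line of the .jsonl file, parsed the same way a loader would parse it.
sample = json.loads(
    '{"original_text": "This was surprisingly good.", '
    '"adversarial_text": "This was surprisingly excellent.", '
    '"true_label": "positive", "is_adversarial": true, '
    '"attack_type": "TextFooler"}'
)
print(validate_record(sample))
```

A check like this can be run over every line before training to catch malformed records early.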

### Example Record

```json
{
  "original_text": "This was surprisingly good.",
  "adversarial_text": "This was surprisingly excellent.",
  "true_label": "positive",
  "is_adversarial": true,
  "attack_type": "TextFooler"
}
```

## Dataset Statistics

- **Total Records**: 400+ (exact count depends on final dataset size after deduplication).
- **Label Distribution**:
  - Positive: ~50%
  - Negative: ~40%
  - Neutral: ~10%
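The label distribution above can be reproduced with pandas. The sketch below uses a made-up inline frame so it runs standalone; on the real file you would instead load `adversarial_machine_learning_textfooler_dataset.jsonl` with `pd.read_json(..., lines=True)`:

```python
import pandas as pd

# Inline sample labels (invented for illustration) standing in for the
# "true_label" column of the real dataset.
df = pd.DataFrame({"true_label": ["positive", "positive", "negative", "neutral"]})

# normalize=True converts raw counts to fractions of the total.
distribution = df["true_label"].value_counts(normalize=True)
print(distribution)
```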

**Attack Types**:

- **TextFooler**: Synonym-based word replacement.
- **Homoglyph**: Substitution with visually similar characters (e.g., Cyrillic or Greek letters).
- **Number Substitution**: Replacing letters with numbers (e.g., `e` with `3`).
- **Typo**: Intentional misspellings.
- **Emoji**: Addition of emoji characters.
- **Semantic Shift**: Rewording with similar meaning but different phrasing.
- **Paraphrase**: Complete rephrasing of the sentence.
- **Zero-Width Space/Non-Joiner/Joiner**: Injection of invisible Unicode characters.
- **Cyrillic/Greek Homoglyph**: Use of non-Latin characters resembling Latin ones.
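Invisible-character attacks are the easiest of these to screen for mechanically. The following sketch (the perturbed sample is hypothetical, not drawn from the dataset) flags text containing the three zero-width code points named above:

```python
# Code points used by the Zero-Width Space/Non-Joiner/Joiner attack types:
# U+200B (ZWSP), U+200C (ZWNJ), U+200D (ZWJ).
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d"}

def contains_zero_width(text: str) -> bool:
    """Return True if the text contains any zero-width character."""
    return any(ch in ZERO_WIDTH for ch in text)

clean = "This was surprisingly good."
attacked = "This was sur\u200bprisingly good."  # hypothetical perturbed sample

print(contains_zero_width(clean))
print(contains_zero_width(attacked))
```

Homoglyph attacks can be screened similarly by checking for non-Latin code points in otherwise-Latin text, though that requires a more careful allowlist.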

## Usage

This dataset is ideal for:

- **Adversarial Robustness Testing**: Evaluate NLP models (e.g., sentiment classifiers) against adversarial attacks.
- **Attack Detection**: Develop methods to detect adversarial perturbations in text inputs.
- **Model Hardening**: Train models to improve resilience against adversarial attacks.
- **Research**: Study the impact of various adversarial attack types on NLP model performance.
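A common robustness metric over paired data like this is the attack success rate: the fraction of pairs where a model's prediction flips between the original and adversarial text. The sketch below uses a toy keyword classifier and two invented record pairs purely to make the idea runnable; substitute any real sentiment model for `predict`:

```python
def predict(text: str) -> str:
    # Toy stand-in for a real sentiment classifier (assumption, not part of
    # the dataset): positive iff a known positive keyword appears.
    positives = {"good", "great", "excellent"}
    return "positive" if any(w in text.lower() for w in positives) else "negative"

# Hypothetical record pairs in the dataset's schema (texts invented here).
records = [
    # Synonym swap to an out-of-vocabulary word evades the toy model.
    {"original_text": "This was surprisingly good.",
     "adversarial_text": "This was surprisingly decent."},
    # Synonym swap that stays in-vocabulary does not.
    {"original_text": "A great film.",
     "adversarial_text": "An excellent film."},
]

flipped = sum(
    predict(r["original_text"]) != predict(r["adversarial_text"])
    for r in records
)
attack_success_rate = flipped / len(records)
print(f"Attack success rate: {attack_success_rate:.0%}")
```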

## Loading the Dataset

You can load the dataset in Python with libraries like `json` or `pandas`:

```python
import json

dataset = []
with open("adversarial_machine_learning_textfooler_dataset.jsonl", "r") as file:
    for line in file:
        dataset.append(json.loads(line))

# Example: print the first record
print(dataset[0])
```

## Example Analysis

To analyze the dataset, you can use Python to compute statistics or visualize the attack type distribution:

```python
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt

# Load dataset
df = pd.read_json("adversarial_machine_learning_textfooler_dataset.jsonl", lines=True)

# Count attack types and plot their distribution
attack_counts = Counter(df["attack_type"])
plt.bar(attack_counts.keys(), attack_counts.values())
plt.xticks(rotation=45)
plt.title("Distribution of Attack Types")
plt.xlabel("Attack Type")
plt.ylabel("Count")
plt.savefig("attack_type_distribution.png")
```

## Installation and Dependencies

To work with this dataset, ensure you have the following Python libraries:

- `json`: For parsing the JSON Lines format (part of the Python standard library; no installation needed).
- `pandas`: For data manipulation and analysis.
- `matplotlib`: For visualization (optional).

Install the third-party dependencies:

```shell
pip install pandas matplotlib
```

## Notes

- **Deduplication**: Some records may be duplicated due to repetitive attack patterns. It is recommended to deduplicate the dataset before use.
- **Ethical Considerations**: This dataset is for research purposes only. Do not use it to create or deploy malicious adversarial attacks in production systems.
- **Limitations**: The dataset focuses on English text and may not generalize to other languages. Some attack types (e.g., Zero-Width characters) may be less effective against certain NLP models.
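The recommended deduplication step can be sketched as follows. The choice of key is an assumption; adjust the tuple to whatever you consider a duplicate (e.g., include `true_label`):

```python
def deduplicate(records):
    """Keep the first occurrence of each (original, adversarial, attack) triple."""
    seen = set()
    unique = []
    for r in records:
        key = (r["original_text"], r["adversarial_text"], r["attack_type"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

# Two identical toy rows collapse to one.
rows = [
    {"original_text": "a", "adversarial_text": "b", "attack_type": "Typo"},
    {"original_text": "a", "adversarial_text": "b", "attack_type": "Typo"},
]
print(len(deduplicate(rows)))
```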

## Contributing

Contributions to expand the dataset with new attack types or additional samples are welcome. Please submit a pull request or contact the maintainers for collaboration.

## License

This dataset is released under the MIT License.

## Contact

For questions or issues, please open an issue on the repository or contact the dataset maintainers at sunny48445@gmail.com.
