
ArQuAD: An Expert-Annotated Arabic Machine Reading Comprehension Dataset

Overview

ArQuAD is an expert-annotated Arabic Machine Reading Comprehension (MRC) dataset. It comprises 16,020 questions posed by language experts on passages extracted from the most frequently visited Arabic Wikipedia articles. Each question's answer is a text segment from the corresponding reading passage.

Citation

If you use ArQuAD in your research, please cite the following paper:

```bibtex
@article{obeidat2024arquad,
  title={ArQuAD: An Expert-Annotated Arabic Machine Reading Comprehension Dataset},
  author={Obeidat, Rasha and Al-Harbi, Marwa and Al-Ayyoub, Mahmoud and Alawneh, Luay},
  journal={Cognitive Computation},
  pages={1--20},
  year={2024},
  publisher={Springer}
}
```

Dataset Description

ArQuAD consists of 16,020 question-answer pairs created by Arabic language specialists with BA and MA degrees. The passages are sourced from 1,335 of the most viewed Arabic Wikipedia articles, covering a wide range of topics including sports, politics, technology, religion, and more.

Structure

The dataset is provided in both CSV and SQuAD JSON formats with the following columns:

  • passage: The original passage from Wikipedia.
  • question: The question posed by the annotator.
  • answer: The minimal text span from the passage that answers the question.
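Since answers are extractive spans, every answer string appears verbatim in its passage. A minimal sketch of one record, with hypothetical content (not an actual row from the dataset), might look like:

```python
# A hypothetical record illustrating the three columns of ArQuAD.
record = {
    "passage": "ArQuAD was built from the most visited Arabic Wikipedia articles.",
    "question": "Which articles was ArQuAD built from?",
    "answer": "the most visited Arabic Wikipedia articles",
}

# Because the dataset is extractive, each answer is a contiguous
# span copied from its reading passage.
assert record["answer"] in record["passage"]
```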

Statistics

The dataset includes:

  • Total pairs: 16,020
  • Passages: 4,005
  • Domains covered: Various (sports, politics, technology, etc.)

Key Features

  • Expert-Annotated: Questions and answers are created by language experts, ensuring high quality and relevance.
  • Diverse: Covers a wide range of topics for comprehensive evaluation of MRC models, and includes a mix of factoid and non-factoid questions.

Usage

To use this dataset, download the CSV file from the repository and load it into your preferred data analysis tool.
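For example, the CSV can be loaded with pandas. The snippet below uses an in-memory sample with the same three columns as a stand-in for the downloaded file; the file name and row content are illustrative only.

```python
import io

import pandas as pd

# Stand-in for the downloaded ArQuAD CSV; the real file is read the
# same way, e.g. pd.read_csv("ArQuAD.csv") after downloading it.
sample_csv = io.StringIO(
    "passage,question,answer\n"
    '"The Nile is a major river in Africa.","Where is the Nile located?","Africa"\n'
)

df = pd.read_csv(sample_csv)
print(list(df.columns))  # ['passage', 'question', 'answer']
```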

Contact

For any questions or issues regarding the dataset, please contact Rasha Obeidat (rmobeidat@just.edu.jo) or any of the authors.
