
# FNFC: Functional & Non-Functional Requirements Classification Dataset

Link: [Hugging Face – FNFC Dataset](https://huggingface.co/datasets/Mashhad-Azad-University/FNFC-Functional_Non-Functional_Classification)

## Overview

The FNFC dataset is a labeled collection of 7,060 software requirement statements categorized into 14 requirement classes, designed for research and modeling in requirements classification. It was created by refining and re-labeling the Fault-prone SRS Dataset from Kaggle, ensuring high-quality annotations through expert review.

## Data Collection & Labeling

The labeling process was carried out by five professionals with expertise in software project leadership, systems analysis, or software requirements engineering:

  • 2 experts with over 10 years of experience
  • 3 experts with 2–3 years of experience in requirements engineering

Each expert received:

  • A questionnaire
  • Relevant documentation
  • Clear labeling criteria for identifying functional and non-functional requirements

Experts were given 20 days to complete labeling at their own pace. Each record was assigned to one of the following classes:

### Classes

  • F – Functional Requirements
  • A – Availability
  • AU – Autonomy
  • FT – Fault Tolerance
  • LF – Look and Feel
  • LL – Legal & Licensing
  • M – Maintainability
  • O – Inter-Operability
  • P – Portability
  • PE – Performance
  • R – Reliability
  • SC – Scalability
  • SE – Security
  • US – Usability
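
The abbreviations above can be expanded programmatically; the mapping below is an illustrative helper derived from the list in this card, not a file shipped with the dataset.

```python
# Mapping from the label abbreviations used in the dataset to full class names
# (illustrative helper based on this card, not part of the dataset itself).
LABEL_NAMES = {
    "F": "Functional Requirements",
    "A": "Availability",
    "AU": "Autonomy",
    "FT": "Fault Tolerance",
    "LF": "Look and Feel",
    "LL": "Legal & Licensing",
    "M": "Maintainability",
    "O": "Inter-Operability",
    "P": "Portability",
    "PE": "Performance",
    "R": "Reliability",
    "SC": "Scalability",
    "SE": "Security",
    "US": "Usability",
}
```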

## Dataset Structure

| Field | Description |
|---|---|
| id | Unique record identifier |
| requirement | Requirement statement text |
| label | One of the 14 defined classes |
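
A minimal loading sketch using the Hugging Face `datasets` library, assuming the default configuration and a `train` split with the column names listed above (adjust if the repository layout differs):

```python
from datasets import load_dataset

# Load the FNFC dataset from the Hugging Face Hub
# (repository ID taken from the link in this card).
ds = load_dataset("Mashhad-Azad-University/FNFC-Functional_Non-Functional_Classification")

# Inspect the first record of the train split; the card lists
# three fields: id, requirement, and label.
example = ds["train"][0]
print(example["id"], example["label"])
print(example["requirement"])
```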

## Purpose

The dataset provides a robust benchmark for:

  • Machine learning models for requirements classification
  • Natural language processing experiments in software engineering
  • Studies comparing classification methods for functional vs. non-functional requirements
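
As a hedged illustration of the first use case, the sketch below trains a simple TF-IDF plus logistic regression baseline with scikit-learn; it assumes the dataset loads as shown earlier and is not an official benchmark script.

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Load requirement texts and labels (column names taken from this card).
ds = load_dataset("Mashhad-Azad-University/FNFC-Functional_Non-Functional_Classification")
texts = ds["train"]["requirement"]
labels = ds["train"]["label"]

# Hold out a test portion; stratify to keep the 14 classes represented in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

# TF-IDF bag-of-words features + logistic regression as a simple baseline.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```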

## Access

The dataset is publicly available:
[Hugging Face – FNFC Dataset](https://huggingface.co/datasets/Mashhad-Azad-University/FNFC-Functional_Non-Functional_Classification)

## Citation

If you use this dataset in academic research, please cite:

@dataset{fnfc_dataset_2025,
  title={FNFC: Functional & Non-Functional Requirements Classification Dataset},
  author={Mashhad Azad University},
  year={2025},
  url={https://huggingface.co/datasets/Mashhad-Azad-University/FNFC-Functional_Non-Functional_Classification}
}

## Contact
Created by Mahdi Kabootari & Younes Abdeahad

📬 kabootarimahdi2@gmail.com
📬 abdeahad.y3@gmail.com
📬 e.kheirkhah@gmail.com