Note: the dataset viewer currently fails for the 'train' split of the 'default' config because the underlying CSV is malformed. Pandas reports:

Error code:   FeaturesError
Exception:    ParserError
Message:      Error tokenizing data. C error: Expected 14 fields in line 5, saw 15

In other words, at least one data row contains more fields than the header declares.
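The parser error means at least one CSV row contains an extra delimiter. When loading the raw file directly, pandas can be told to skip such rows. The snippet below reproduces the failure mode on synthetic data; it is a workaround sketch, not the actual repository file:

```python
import io

import pandas as pd

# Synthetic CSV reproducing the failure mode: the second data row
# has one extra field compared to the 3-column header.
raw = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10\n"

# Default parsing would raise pandas.errors.ParserError on the bad row:
#   pd.read_csv(io.StringIO(raw))

# Skipping malformed rows lets the rest of the file load (pandas >= 1.3):
df = pd.read_csv(io.StringIO(raw), on_bad_lines="skip")
print(len(df))  # → 2
```

Skipping silently drops data, so for a toxicology dataset it is worth inspecting the offending rows (e.g., unescaped commas inside a symptoms field) rather than discarding them.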


Dataset Card for ToxiCore

This dataset card provides documentation for the ToxiCore dataset, designed to support the development of AI models in toxicology, including toxin detection and antidote recommendation systems.

Dataset Details

Dataset Description

ToxiCore is a structured dataset containing data on toxic compounds, including their chemical characteristics, associated symptoms, and known treatments or antidotes. It is intended to help train and evaluate machine learning models capable of identifying toxic agents and recommending countermeasures. This dataset is particularly useful in healthcare, emergency response, and pharmacological research.

  • Curated by: Hayden Banz
  • Funded by: More Information Needed
  • Shared by: Hayden Banz
  • Language(s): English
  • License: CC BY-NC-ND 4.0

Uses

Direct Use

This dataset is suitable for:

  • Machine learning research in toxicology
  • Building antidote recommendation systems
  • Educational purposes in pharmacology and healthcare AI
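As a toy illustration of the second use case, an antidote recommendation baseline could start from a simple lookup over dataset records. The entries and fallback message below are illustrative assumptions, not drawn from the dataset:

```python
# Minimal antidote-lookup baseline; the pairings below are well-known
# examples from the toxicology literature, used here only as placeholders.
KNOWN_ANTIDOTES = {
    "methanol": "fomepizole",
    "organophosphates": "atropine",
    "acetaminophen": "N-acetylcysteine",
}

def recommend_antidote(compound: str) -> str:
    """Normalize the compound name and look up a known antidote."""
    key = compound.strip().lower()
    return KNOWN_ANTIDOTES.get(key, "no antidote on record; seek expert advice")

print(recommend_antidote("Methanol"))  # → "fomepizole"
```

A real system trained on this dataset would replace the dictionary with a learned model, but the lookup shape of the problem (compound in, countermeasure out) stays the same.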

Out-of-Scope Use

This dataset is not intended for:

  • Real-time diagnosis or treatment in medical settings without professional validation
  • Use in commercial or clinical tools without regulatory approval
  • Any application involving personal or sensitive health data processing

Dataset Structure

The dataset includes fields such as:

  • compound_name
  • toxicity_level
  • chemical_properties
  • symptoms
  • recommended_antidote

More detailed structure and formatting information can be found in the dataset repository.
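To make the schema concrete, here is a hypothetical record using the field names listed above, together with a small validation helper. The sample values are illustrative assumptions, not taken from the repository:

```python
# Field names as documented in the dataset card.
EXPECTED_FIELDS = [
    "compound_name",
    "toxicity_level",
    "chemical_properties",
    "symptoms",
    "recommended_antidote",
]

# Illustrative record; values are examples, not actual dataset rows.
sample_record = {
    "compound_name": "methanol",
    "toxicity_level": "high",
    "chemical_properties": "CH3OH, water-miscible alcohol",
    "symptoms": "visual disturbance; metabolic acidosis",
    "recommended_antidote": "fomepizole",
}

def validate_record(record: dict) -> list:
    """Return the expected fields missing from a record."""
    return [f for f in EXPECTED_FIELDS if f not in record]

print(validate_record(sample_record))  # → []
```

A check like this is also a quick way to catch the extra-field rows that currently break the dataset viewer.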

Dataset Creation

Curation Rationale

This dataset was created to provide a resource for research in AI-based toxicology systems, with a focus on public health applications and academic study.

Source Data

Data Collection and Processing

Data was collected from publicly available scientific literature and verified toxicology resources. Processing steps included cleaning, normalization, and categorization of compound-related data.
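The exact processing pipeline is not published; as a rough sketch of the cleaning, normalization, and categorization steps described, one might write (function names and thresholds here are assumptions, not the curator's code):

```python
def normalize_compound_name(name: str) -> str:
    """Collapse whitespace and lowercase a raw compound name."""
    return " ".join(name.split()).lower()

def categorize_toxicity(ld50_mg_per_kg: float) -> str:
    """Bucket an LD50 value into a coarse toxicity category.

    The thresholds are illustrative only; they are not taken
    from the dataset or any regulatory classification.
    """
    if ld50_mg_per_kg < 50:
        return "high"
    if ld50_mg_per_kg < 500:
        return "moderate"
    return "low"

print(normalize_compound_name("  Methanol "))  # → "methanol"
print(categorize_toxicity(30))                 # → "high"
```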

Who are the source data producers?

The original data comes from medical toxicology literature, chemical safety databases, and government or public health publications.

Annotations

Annotation Process

Annotations (e.g., symptom tagging, antidote mapping) were manually verified against the referenced sources.

Who are the annotators?

The dataset was reviewed and annotated by the curator, Hayden Banz, with assistance from open-source contributors.

Personal and Sensitive Information

The dataset does not contain personal, sensitive, or private information.

Bias, Risks, and Limitations

While ToxiCore provides a valuable base for research, it may not include rare toxins, region-specific compounds, or all possible antidotes. Use with caution in high-stakes scenarios.

Recommendations

This dataset should be used in conjunction with expert guidance and clinical validation. It is recommended for academic, research, and non-commercial experimentation.

Citation

BibTeX:

@misc{toxicore2025,
  author = {Hayden Banz},
  title = {ToxiCore: Dataset for Toxin Detection and Antidote Recommendation},
  year = {2025},
  howpublished = {\url{https://huggingface.co/datasets/haydenbanz/ToxiCore}},
}