HPA10M Dataset

A large-scale immunohistochemistry (IHC) image dataset derived from the Human Protein Atlas (HPA, https://www.proteinatlas.org/), containing approximately 10.5 million pathology and tissue images with detailed annotations.

Dataset Overview

Statistic Value
Total Images 10,495,672
Training Set 10,493,672 images (10,497 tar files)
Validation Set 2,000 images (1 tar file)
Image Types Pathology (7,970,595) / Tissue (2,525,077)
Format JPEG images + JSON metadata

Directory Structure

hpa10m/
├── README.md                              # This file
├── example_images/                        # Sample images for preview
├── hpa10m_train/                          # Training data (WebDataset tar files)
│   ├── hpa10m_train_0000.tar              # Training shards (10,497 files)
│   ├── hpa10m_train_0001.tar
│   ├── ...
├── hpa10m_validation/                     # Validation data
│   └── hpa10m_validation.tar              # All validation samples (2,000 images)
└── hpa10m_tar_summary/                    # Metadata index files
    └── all.feather                        # Complete index of all images

Data Format

Tar Archives (WebDataset Format)

Each tar file contains paired .jpg and .json files organized by:

  • Image category: pathology/ or tissue/
  • Gene prefix: Two-letter gene name prefix (e.g., AB/, CD/)
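As a minimal sketch of reading such a shard without extra dependencies, the paired .jpg/.json members can be grouped by their shared key using only the standard library. This assumes the WebDataset convention that the two files of a sample are adjacent in the archive; the function name `iter_pairs` is illustrative, not part of the dataset's tooling.

```python
import json
import tarfile

def iter_pairs(tar_path):
    """Yield (key, jpg_bytes, metadata_dict) for each sample in a shard.

    Assumes each sample is stored as adjacent <key>.jpg / <key>.json
    members, per the WebDataset convention described above.
    """
    current_key, parts = None, {}
    with tarfile.open(tar_path) as tar:
        for member in tar:
            if not member.isfile():
                continue
            key, _, ext = member.name.rpartition(".")
            if key != current_key:
                # A new key starts: emit the previous complete sample.
                if current_key is not None and "jpg" in parts and "json" in parts:
                    yield current_key, parts["jpg"], json.loads(parts["json"])
                current_key, parts = key, {}
            parts[ext] = tar.extractfile(member).read()
        # Emit the final sample in the shard.
        if current_key is not None and "jpg" in parts and "json" in parts:
            yield current_key, parts["jpg"], json.loads(parts["json"])
```

For production use, the `webdataset` library or `datasets` with streaming offers the same grouping plus shuffling and decoding.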

JSON Metadata Structure

Each image has a corresponding JSON file with rich annotations:

{
  "metadata": {
    "height": 3000,
    "width": 3000,
    "name": "image_filename.jpg",
    "format": ".jpg"
  },
  "custom_metadata": {
    "gene": "TEKT3",
    "ensembl_id": "ENSG00000125409",
    "uniprot_id": "Q9BXF9",
    "tissue": "skin cancer",
    "cell_type": "Tumor cells",
    "patient_id": 3354,
    "patient_age": 92,
    "patient_sex": "male",
    "snomed_code": "M-80703;T-01000",
    "snomed_text": "Squamous cell carcinoma, NOS;Skin",
    "staining_intensity": "negative",
    "staining_location": "none",
    "staining_quantity": "none",
    "generic_caption": "Immunohistochemical staining of human skin cancer...",
    "caption_1": "Detailed caption describing the image...",
    "caption_2": "Alternative caption...",
    "url": "http://images.proteinatlas.org/...",
    "bboxes": [[x, y, w, h], ...],
    "rle_mask": "encoded_segmentation_mask",
    "area_px": 3883806,
    "area_fraction": 0.431534
  }
}

Index Files (Feather Format)

The hpa10m_tar_summary/all.feather file contains an index of all images with columns:

Column Description
tar_filename Source tar archive name
split Dataset split (train/validation)
name Full path within tar archive
type Image type (pathology/tissue)
img_offset Byte offset of image in tar
img_size Image file size in bytes
json_offset Byte offset of JSON in tar
json_size JSON file size in bytes
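Because the index records byte offsets and sizes, a single image can be read from a shard without scanning the whole archive. A minimal sketch, assuming `img_offset` points at the start of the file's data payload (not its tar header) as recorded in `all.feather`:

```python
def read_image_bytes(tar_path, offset, size):
    """Read one member's raw bytes directly from a tar shard
    using the (offset, size) pair recorded in the index."""
    with open(tar_path, "rb") as f:
        f.seek(offset)
        return f.read(size)
```

The same offsets can drive ranged reads against remote storage (e.g. via fsspec over `hf://` paths), which avoids downloading entire multi-gigabyte shards to fetch individual samples.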

Key Annotations

Clinical Information

  • gene: Gene name (e.g., "TEKT3")
  • ensembl_id: Ensembl gene ID (e.g., "ENSG00000125409")
  • uniprot_id: UniProt protein ID (e.g., "Q9BXF9")
  • tissue: Tissue or cancer type (e.g., "skin cancer")
  • uberon_id: UBERON ontology ID
  • cell_type: Cell type (e.g., "Tumor cells")
  • patient_id: Patient identifier
  • patient_age: Patient age
  • patient_sex: Patient sex ("male" / "female")
  • snomed_code: SNOMED-CT code (e.g., "M-80703;T-01000")
  • snomed_text: SNOMED-CT description (e.g., "Squamous cell carcinoma, NOS;Skin")

Staining Characteristics

  • staining_intensity: "negative", "weak", "moderate", "strong"
  • staining_location: "nuclear", "cytoplasmic/membranous", "cytoplasmic/membranous,nuclear", "none"
  • staining_quantity: "none", "<25%", "25-75%", ">75%"
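For modeling, these categorical fields can be mapped to numeric values. The ordinal encoding below is a sketch: the intensity ranks follow the order listed above, but the quantity midpoints (e.g. 0.5 for "25-75%") are assumptions, not values defined by the dataset.

```python
# Ordinal ranks follow the documented order; quantity midpoints are assumed.
INTENSITY = {"negative": 0, "weak": 1, "moderate": 2, "strong": 3}
QUANTITY = {"none": 0.0, "<25%": 0.125, "25-75%": 0.5, ">75%": 0.875}

def staining_score(meta):
    """Combine intensity rank and assumed quantity midpoint
    into a single scalar staining score."""
    cm = meta["custom_metadata"]
    return INTENSITY[cm["staining_intensity"]] * QUANTITY[cm["staining_quantity"]]
```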

Segmentation Data

  • bboxes: Bounding boxes in [[x, y, width, height], ...] format
  • rle_mask: Segmentation mask
  • area_px: Segmented area in pixels
  • area_fraction: Fraction of image covered by segmentation
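The segmentation fields are mutually consistent: area_fraction should equal area_px divided by the image's pixel count (height × width from the metadata block). A small sanity check, using the values from the JSON example above:

```python
def check_area_fraction(meta, tol=1e-6):
    """Verify that area_fraction == area_px / (height * width)
    within a small tolerance."""
    m, cm = meta["metadata"], meta["custom_metadata"]
    expected = cm["area_px"] / (m["height"] * m["width"])
    return abs(expected - cm["area_fraction"]) < tol
```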

Natural Language Captions

  • generic_caption: Standardized description
  • caption_1: Detailed scientific description
  • caption_2: Alternative description

Other Metadata

  • url: Original image URL from Human Protein Atlas
  • image_md5: MD5 hash of original image
  • file_size_kb: Image file size in KB

Usage

Loading Index with Pandas

import pandas as pd

# Load complete index
df = pd.read_feather("hpa10m_tar_summary/all.feather")

# Filter by split
train_df = df[df["split"] == "train"]
val_df = df[df["split"] == "validation"]

# Filter by image type
pathology_df = df[df["type"] == "pathology"]
tissue_df = df[df["type"] == "tissue"]
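The gene symbol can also be recovered from the `name` column. Sample keys follow the pattern `<type>/<prefix>/<GENE>_UP-<uniprot>_<antibody>_md5-<hash>`, so a sketch of extracting the gene (the helper name is illustrative) is:

```python
def gene_from_name(name):
    """Extract the gene symbol from an archive path such as
    'tissue/TO/TOMM70_UP-O94826_CAB017156_md5-...'.
    Assumes the gene is the first underscore-separated token of the basename."""
    return name.rsplit("/", 1)[-1].split("_", 1)[0]
```

Applied to the index with pandas, e.g. `df["gene"] = df["name"].map(gene_from_name)`, this enables per-gene filtering and sampling.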

Data Source

This dataset is derived from the Human Protein Atlas (https://www.proteinatlas.org/), a comprehensive resource for protein expression in human tissues and cancers.

License

Please refer to the Human Protein Atlas data usage terms at https://www.proteinatlas.org/about/licence for licensing information.

📧 Contact

For questions or suggestions, please contact: jjnirschl@wisc.edu or zhi.huang@pennmedicine.upenn.edu
