| column | dtype | range / classes |
|---|---|---|
| _id | string | length 24 |
| id | string | length 4 to 121 |
| author | string | length 2 to 42 |
| cardData | string | length 2 to 1.09M |
| disabled | bool | 1 class |
| gated | string | 3 classes |
| lastModified | timestamp[ns] | 2021-02-05 16:03:35 to 2026-01-28 13:35:50 |
| likes | int64 | 0 to 9.58k |
| trendingScore | float64 | 0 to 122 |
| private | bool | 1 class |
| sha | string | length 40 |
| description | string | length 0 to 6.67k |
| downloads | int64 | 0 to 1.81M |
| downloadsAllTime | int64 | 0 to 143M |
| tags | list | length 1 to 7.92k |
| createdAt | timestamp[ns] | 2022-03-02 23:29:22 to 2026-01-28 13:35:20 |
| paperswithcode_id | string | 687 classes |
| citation | string | length 0 to 10.7k |
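Rows of this listing can be modeled as typed records. A minimal sketch in Python, assuming each row arrives as a plain dict keyed by the column names above; the `DatasetRecord` class and `parse_row` helper are illustrative, not part of any library:

```python
import json
from dataclasses import dataclass


@dataclass
class DatasetRecord:
    id: str
    author: str
    card_data: dict  # cardData parsed from its JSON string
    likes: int
    downloads_all_time: int
    tags: list[str]


def parse_row(row: dict) -> DatasetRecord:
    """Convert one raw listing row into a typed record."""
    return DatasetRecord(
        id=row["id"],
        author=row["author"],
        card_data=json.loads(row["cardData"]),
        likes=int(row["likes"]),
        downloads_all_time=int(row["downloadsAllTime"]),
        tags=list(row["tags"]),
    )


row = {
    "id": "example/dataset",
    "author": "example",
    "cardData": '{"license": "apache-2.0"}',
    "likes": 1,
    "downloadsAllTime": 10,
    "tags": ["region:us"],
}
record = parse_row(row)
print(record.card_data["license"])  # apache-2.0
```

Parsing `cardData` eagerly is a design choice: the field is a JSON string of widely varying size (2 bytes to 1.09M in this listing), so lazy parsing may be preferable at scale.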
**Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b**
- _id: 69524c8ad001e56220ced9bc
- author: Alibaba-Apsara
- cardData:
{"license": "cc-by-4.0", "task_categories": ["text-generation"], "language": ["en"], "tags": ["code", "math", "scientific-qa", "instruction-following", "reasoning", "thinking", "gpt-oss-120b", "distill"], "size_categories": ["435K"], "configs": [{"config_name": "stage1", "data_files": "Superior-Reasoning-SFT-gpt-oss-120b-stage1-train-data.jsonl", "features": [{"name": "uuid", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "meta", "dtype": "string"}]}, {"config_name": "stage2", "data_files": "Superior-Reasoning-SFT-gpt-oss-120b-stage2-train-data.jsonl", "features": [{"name": "uuid", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "meta", "dtype": "string"}]}]}
- disabled: false
- gated: False
- lastModified: 2026-01-15T06:39:55
- likes: 290
- trendingScore: 122
- private: false
- sha: e9d54e2a3f376fd5c62cafd3c4c99b304cdda698
- description: Superior-Reasoning-SFT-gpt-oss-120b. Overview: the Superior-Reasoning-SFT-gpt-oss-120b dataset is a high-quality, open-source collection of 435K samples designed to democratize the training of high-performance Long Chain-of-Thought (Long-CoT) models. Unlike standard distilled datasets that rely on random sampling or heuristic filtering, Superior-Reasoning-SFT-gpt-oss-120b is constructed using a principled Distribution-Aligned Sequence… See the full description on the dataset page: https://huggingface.co/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b.
- downloads: 23,135
- downloadsAllTime: 23,135
- tags:
[
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:polars",
"library:mlcroissant",
"arxiv:2601.09088",
"arxiv:2512.20908",
"region:us",
"code",
"math",
"scientific-qa",
"instruction-following",
"reasoning",
"thinking",
"gpt-oss-120b",
"distill"
]
- createdAt: 2025-12-29T09:40:26
- paperswithcode_id: null
- citation: null
**sojuL/RubricHub_v1**
- _id: 696b2406e6c69ff4f49745f4
- author: sojuL
- cardData:
{"license": "apache-2.0", "language": ["zh", "en"], "tags": ["medical", "science", "wirting", "isntruction", "chat", "general"], "pretty_name": "RubricHub", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "reinforcement-learning", "question-answering"]}
- disabled: false
- gated: False
- lastModified: 2026-01-20T07:16:51
- likes: 128
- trendingScore: 114
- private: false
- sha: bec50742963ed3672391fecbcc4b60067b9fa8bc
- description: RubricHub_v1. RubricHub is a large-scale (approximately 110K samples), multi-domain dataset that provides high-quality rubric-based supervision for open-ended generation tasks. It is constructed via an automated coarse-to-fine rubric generation framework that integrates principle-guided synthesis, multi-model aggregation, and difficulty evolution to produce comprehensive, highly discriminative evaluation criteria, overcoming the supervision ceiling of coarse or static rubrics… See the full description on the dataset page: https://huggingface.co/datasets/sojuL/RubricHub_v1.
- downloads: 581
- downloadsAllTime: 581
- tags:
[
"task_categories:text-generation",
"task_categories:reinforcement-learning",
"task_categories:question-answering",
"language:zh",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:polars",
"library:mlcroissant",
"arxiv:2601.08430",
"region:us",
"medical",
"science",
"wirting",
"isntruction",
"chat",
"general"
]
- createdAt: 2026-01-17T05:54:14
- paperswithcode_id: null
- citation: null
**lightonai/LightOnOCR-mix-0126**
- _id: 6969078587ce326016ddda46
- author: lightonai
- cardData:
{"dataset_info": {"features": [{"name": "key", "dtype": "string"}, {"name": "page_idx", "dtype": "int64"}, {"name": "content", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "element_counts", "struct": [{"name": "formulas", "dtype": "int64"}, {"name": "images", "dtype": "int64"}, {"name": "tables", "dtype": "int64"}]}, {"name": "token_length", "dtype": "int64"}]}], "splits": [{"name": "pdfa_train", "num_bytes": 38584453222, "num_examples": 16428833}, {"name": "pdfa_validation", "num_bytes": 4689687, "num_examples": 2000}], "download_size": 21111271721, "dataset_size": 38589142909}, "configs": [{"config_name": "default", "data_files": [{"split": "pdfa_train", "path": "data/pdfa_train-*"}, {"split": "pdfa_validation", "path": "data/pdfa_validation-*"}]}], "license": "other", "task_categories": ["image-to-text"], "language": ["en", "fr", "de", "es", "it", "ja", "ru", "pl", "nl", "zh", "pt", "bg", "tr", "ur", "hi", "th", "ar", "sw", "el", "vi"], "tags": ["ocr"], "size_categories": ["10M<n<100M"], "pretty_name": "LightOnOCR-mix"}
- disabled: false
- gated: False
- lastModified: 2026-01-26T16:29:46
- likes: 101
- trendingScore: 85
- private: false
- sha: af0218b88fc337468d91f9c107ae33453f65cf30
- description: LightOnOCR-mix-0126. LightOnOCR-mix-0126 is a large-scale OCR training dataset built via distillation: a strong vision-language model is prompted to produce naturally ordered full-page transcriptions (Markdown with LaTeX math spans and HTML tables) from rendered document pages. The dataset is designed as supervision for end-to-end OCR / document-understanding models that aim to output clean, human-readable text in a consistent format. This repository releases the PDFA-derived… See the full description on the dataset page: https://huggingface.co/datasets/lightonai/LightOnOCR-mix-0126.
- downloads: 1,175
- downloadsAllTime: 1,175
- tags:
[
"task_categories:image-to-text",
"language:en",
"language:fr",
"language:de",
"language:es",
"language:it",
"language:ja",
"language:ru",
"language:pl",
"language:nl",
"language:zh",
"language:pt",
"language:bg",
"language:tr",
"language:ur",
"language:hi",
"language:th",
"language:ar",
"language:sw",
"language:el",
"language:vi",
"license:other",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:polars",
"library:mlcroissant",
"arxiv:2601.14251",
"region:eu",
"ocr"
]
- createdAt: 2026-01-15T15:28:05
- paperswithcode_id: null
- citation: null
**facebook/action100m-preview**
- _id: 69676b65aeecdadc87f8da8e
- author: facebook
- cardData:
{"license": "fair-noncommercial-research-license", "language": ["en"], "tags": ["video", "action"], "size_categories": ["10M<n<100M"]}
- disabled: false
- gated: False
- lastModified: 2026-01-14T14:24:13
- likes: 127
- trendingScore: 79
- private: false
- sha: c9404b5c9772d6883a2f062945273f171b585275
- description: Action100M: A Large-scale Video Action Dataset. The data can be loaded from the Hugging Face repo facebook/action100m-preview, where 10% of the full Action100M has been released for preview. For examples of loading from local parquet files (from a cloned repo) and visualization, see the project's GitHub repo.

  ```python
  from datasets import load_dataset

  dataset = load_dataset(
      "parquet",
      data_files="hf://datasets/facebook/Action100M-preview/data/*.parquet",
      streaming=True,
  )
  it = …
  ```

  See the full description on the dataset page: https://huggingface.co/datasets/facebook/action100m-preview.
- downloads: 3,575
- downloadsAllTime: 3,575
- tags:
[
"language:en",
"license:fair-noncommercial-research-license",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"modality:video",
"library:datasets",
"library:dask",
"library:polars",
"library:mlcroissant",
"region:us",
"video",
"action"
]
- createdAt: 2026-01-14T10:09:41
- paperswithcode_id: null
- citation: null
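The `hf://.../data/*.parquet` glob used when loading Action100M from the Hub has a local counterpart once the repo is cloned. A small sketch, assuming the same `data/*.parquet` layout on disk; the `local_parquet_files` helper is illustrative, not from any library:

```python
from pathlib import Path


def local_parquet_files(repo_root: str) -> list[str]:
    """List parquet shards under <repo_root>/data, sorted so the
    load order is deterministic (mirrors the data/*.parquet glob)."""
    return sorted(str(p) for p in Path(repo_root, "data").glob("*.parquet"))
```

The resulting list can then be passed as `data_files` to `load_dataset("parquet", ...)` in place of the `hf://` pattern.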
**Pageshift-Entertainment/LongPage**
- _id: 68ba0ffd343a84103b603c45
- author: Pageshift-Entertainment
- cardData:
{"pretty_name": "LongPage", "dataset_name": "LongPage", "library_name": "datasets", "language": ["en"], "license": ["cc-by-4.0", "other"], "task_categories": ["text-generation"], "task_ids": ["language-modeling", "text2text-generation"], "size_categories": ["n<1K"], "source_datasets": ["original"], "annotations_creators": ["machine-generated"], "language_creators": ["found"], "multilinguality": ["monolingual"], "tags": ["long-context", "cot", "reasoning", "creative-writing", "Cold start reasoning data"], "pretty_visual": "assets/cover_image.png"}
- disabled: false
- gated: False
- lastModified: 2026-01-20T14:01:26
- likes: 131
- trendingScore: 75
- private: false
- sha: 27d907b6a9f92682110e68ef91f001b4812698d6
- description: LongPage. Overview: the first comprehensive dataset for training AI models to write complete novels with sophisticated reasoning. Hierarchical Reasoning Architecture: multi-layered planning traces including character archetypes, story arcs, world rules, and scene breakdowns, forming a complete cognitive roadmap for long-form narrative construction. Complete Novel Coverage: from 40,000 to 600,000+ tokens per book, spanning novellas to epic series with consistent quality throughout. … See the full description on the dataset page: https://huggingface.co/datasets/Pageshift-Entertainment/LongPage.
- downloads: 5,001
- downloadsAllTime: 16,305
- tags:
[
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:text2text-generation",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"license:other",
"size_categories:1K<n<10K",
"format:parquet",
"format:optimized-parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:polars",
"library:mlcroissant",
"region:us",
"long-context",
"cot",
"reasoning",
"creative-writing",
"Cold start reasoning data"
]
- createdAt: 2025-09-04T22:17:33
- paperswithcode_id: null
- citation: null
**opendatalab/ChartVerse-SFT-1800K**
- _id: 696ddc1ba806b4bfbcfc0224
- author: opendatalab
- cardData:
{"license": "apache-2.0", "language": ["en"], "task_categories": ["visual-question-answering", "image-text-to-text"], "tags": ["chart", "reasoning", "vision-language", "multimodal", "chart-understanding", "CoT", "SFT", "large-scale"], "size_categories": ["1M<n<10M"]}
- disabled: false
- gated: False
- lastModified: 2026-01-27T03:00:05
- likes: 75
- trendingScore: 71
- private: false
- sha: eadd63e3b941a766786ae5e5f987a7b79f6a7335
- description: ChartVerse-SFT-1800K is an extended large-scale chart reasoning dataset with Chain-of-Thought (CoT) annotations, developed as part of the opendatalab/ChartVerse project. For more details about the method, datasets, and full model series, see the Project Page. This dataset contains all verified correct samples without failure-rate filtering. Unlike SFT-600K, which excludes easy samples (r=0), SFT-1800K includes the complete set of truth-anchored QA pairs for maximum coverage and scale… See the full description on the dataset page: https://huggingface.co/datasets/opendatalab/ChartVerse-SFT-1800K.
- downloads: 1,373
- downloadsAllTime: 1,373
- tags:
[
"task_categories:visual-question-answering",
"task_categories:image-text-to-text",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:polars",
"library:mlcroissant",
"arxiv:2601.13606",
"region:us",
"chart",
"reasoning",
"vision-language",
"multimodal",
"chart-understanding",
"CoT",
"SFT",
"large-scale"
]
- createdAt: 2026-01-19T07:24:11
- paperswithcode_id: null
- citation: null
**FOMO-MRI/FOMO300K**
- _id: 69660562d230db5333514344
- author: FOMO-MRI
- cardData:
{"license": "other", "license_name": "license", "tags": ["brain", "mri", "ssl", "foundation_model", "3d", "image"], "pretty_name": "FOMO-300K", "size_categories": ["100K<n<1M"], "task_categories": ["image-feature-extraction", "zero-shot-classification"], "viewer": false, "extra_gated_prompt": "\nThis collection of datasets is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. Each individual dataset within the collection retains its original license, which is reported in the corresponding dataset folder. Some datasets are additionally subject to Data Use Agreements (DUAs), which are reported below and in the relevant dataset folders. Users must comply with the applicable license terms and any associated DUAs.\n\nYou are free to:\nShare \u2014 copy and redistribute the material in any medium or format\nAdapt \u2014 remix, transform, and build upon the material\nThe licensor cannot revoke these freedoms as long as you follow the license terms.\n\nUnder the following terms:\nAttribution \u2014 You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.\nNonCommercial \u2014 You may not use the material for commercial purposes.\nShareAlike \u2014 If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.\nNo additional restrictions \u2014 You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.\n\nNotices:\nYou do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.\n\nNo warranties are given. The license may not give you all of the permissions necessary for your intended use. 
For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.\n\nFull license: https://creativecommons.org/licenses/by-nc-sa/4.0/\n\nDUAs:\n\nOASIS Data Use Agreement\n\nThe OASIS data are distributed to the greater scientific community under the following terms:\n1. User will not use the OASIS datasets, either alone or in concert with any other information, to make any effort to identify or contact individuals who are or may be the sources of the information in the dataset. If User inadvertently receives identifiable information or otherwise identifies a subject, User will immediately notify OASIS and follow OASISs reasonable written instructions, which may include the return or destruction of identifiable information.\n2. User is strictly prohibited from generating or using images or comparable representations of the face, head, or body for facial recognition, re-identification, or other purposes that could allow the identities of research participants to be readily ascertained.\n3. User will not use or further disclose the OASIS-3 or OASIS-4 except as required by law. User shall not share, distribute, or otherwise make available the OASIS data, in whole or in part, to any third party, including collaborators, without prior written permission from OASIS. All collaborators must independently apply for access and agree to these terms. Additionally, User will not use or further disclose any derivative works or derivative data of the OASIS datasets, in any case in whole or in part, that could be used to reconstruct a facial image. User shall report to OASIS immediately upon Users discovery of any unauthorized use or disclosure not permitted by this Data Use Agreement. 
User shall provide the following information: (1) the nature of the use or disclosure; (2) the information used or disclosed; (3) the identity of the persons and/or entities that made the use or disclosure; and (4) what corrective action will be taken by User as a result of the use or disclosure. User shall take any other reasonable actions available to it to mitigate any detrimental effects of the use or disclosure.\n4. User agrees to implement appropriate administrative, physical, and technical safeguards to protect the OASIS data from unauthorized access, use or disclosure. OASIS data must be stored on secure, access-controlled systems, and only the User authorized under this Data Use Agreement may access the data.\n5. OASIS data are provided for non-commercial, academic research purposes only. Any commercial use, including but not limited to the sale of data or commercial consulting, is strictly prohibited without explicit, prior written authorization from OASIS.\n6. User agrees to retain OASIS data only for as long as necessary to fulfill the research purposes described in Users application. Upon completion of the research or upon request by OASIS, User will securely destroy or return all copies of the data.\n7. User will acknowledge the use of OASIS data and data derived from OASIS data when publicly presenting any results or algorithms that benefitted from their use. Papers, book chapters, books, posters, oral presentations, and all other printed\nand digital presentations of results derived from OASIS data should contain the following: \n - Acknowledgments: Data were provided [in part] by OASIS [insert appropriate OASIS source info]\n (a) OASIS-1: Cross-Sectional: Principal Investigators: D. Marcus, R, Buckner, J, Csernansky J. Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382\n (b) OASIS-2: Longitudinal: Principal Investigators: D. Marcus, R, Buckner, J. Csernansky, J. 
Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382\n (c) OASIS-3: Longitudinal Multimodal Neuroimaging: Principal Investigators: T. Benzinger, D. Marcus, J. Morris; NIH P30 AG066444, P50 AG00561, P30 NS09857781, P01 AG026276, P01 AG003991, R01 AG043434, UL1 TR000448, R01 EB009352. AV-45 doses were provided by Avid Radiopharmaceuticals, a wholly owned subsidiary of Eli Lilly.\n (d) OASIS-3_AV1451: Principal Investigators: T. Benzinger, J. Morris; NIH P30 AG066444, AW00006993. AV-1451 doses were provided by Avid Radiopharmaceuticals, a wholly owned subsidiary of Eli Lilly.\n (e) OASIS-4: Clinical Cohort: Principal Investigators: T. Benzinger, L. Koenig, P. LaMontagne\n - Citation: The specific publications that are appropriate to cite in any given study will depend on what OASIS data were used and for what purposes. An annotated and current list of OASIS publications is available at http://www.oasis- brains.org.\n (a) OASIS-1: Cross-Sectional: https://doi.org/10.1162/jocn.2007.19.9.1498\n (b) OASIS-2: Longitudinal: https://doi.org/10.1162/jocn.2009.21407\n (c) OASIS-3: Longitudinal Multimodal Neuroimaging: https://doi.org/10.1101/2019.12.13.19014902\n (d) OASIS-4: Clinical Cohort: https://doi.org/10.1016/j.nicl.2020.102248\n - All proposed publications or presentations using Florbetapir F18 (AV45) or Flortaucipir F18 (AV1451) PET data must be submitted to Avid Radiopharmaceuticals for review and comment thirty days prior to such presentation or publication for review of intellectual property interests. See Imaging data dictionary for contact information and details.\n8. User agree to provide the Knight ADRC with information on Users use of OASIS data, upon request.\n9. Failure to abide by these data use terms may result in termination of your right to access and use OASIS data. 
In the event of breach of this Data Use Agreement, OASIS reserves the right to pursue all remedies available at law or in equity, including but not limited to termination of access, notification of the Users institution, and legal action.\n\nBraTS-GEN Data Use Agreement\n\nYou are free to use and/or refer to the BraTS datasets in your own research, provided that you always cite the flagship manuscript (published or pre-published) resulting from the challenge, as well as the following challenge-specific manuscripts:\n\nDataset:\n- Any dataset and/or Med-Perf client\n - Citations Needed\n \u2022 A. Karargyris, R. Umeton, M.J. Sheller, A. Aristizabal, J. George, A. Wuest, S. Pati, et al. \"Federated benchmarking of medical artificial intelligence with MedPerf\". Nature Machine Intelligence. 5:799810 (2023).\n \u2022 DOI: https://doi.org/10.1038/s42256-023-00652-2\n- BraTS-GLI\n - Citations Needed\n 1 U.Baid, et al., The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification, arXiv:2107.02314, 2021.\n 2 B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, et al. \"The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)\", IEEE Transactions on Medical Imaging 34(10), 1993-2024 (2015) DOI: https://doi.org/10.1109 TMI.2014.2377694\n 3 S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J.S. Kirby, et al., \"Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features\", Nature Scientific Data, 4:170117 (2017) DOI: https://doi.org/10.1038/sdata.2017.117\n In addition, if there are no restrictions imposed from the journal/conference you submit your paper about citing \"Data Citations\", please be specific and also cite the following:\n 4 S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., \"Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-GBM collection\", The Cancer Imaging Archive, 2017. 
DOI: https://doi.org/10.7937/K9/TCIA.2017.KLXWJJ1Q\n 5 S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., \"Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-LGG collection\", The Cancer Imaging Archive, 2017. DOI: https://doi.org/10.7937/K9/TCIA.2017.GJQ7R0EF\n- BraTS-MEN\n - Citations Needed\n \u2022 arXiv: https://arxiv.org/abs/2305.07642\n \u2022 DOI: https://doi.org/10.48550/arXiv.2305.07642\n- BraTS-MET\n - Citations Needed\n \u2022 arXiv: https://arxiv.org/abs/2306.00838\n \u2022 DOI: https://doi.org/10.48550/arXiv.2306.00838\n- BraTS-PED\n - Citations Needed\n \u2022 arXiv: https://arxiv.org/abs/2305.17033\n \u2022 DOI: https://doi.org/10.48550/arXiv.2305.17033\n- BraTS-SSA\n - Citations Needed\n 1 Adewole M, Rudie JD, Gbadamosi A, et al. The Brain Tumor Segmentation (BraTS) Challenge 2023: Glioma Segmentation in Sub-Saharan Africa Patient Population (BraTS-Africa). arXiv:2305.19369 [eess.IV] (2023).\n \u2022 arXiv: https://arxiv.org/abs/2305.19369\n \u2022 DOI: https://doi.org/10.48550/arXiv.2305.19369\n \nNote: Challenge participants agree to cite the initial challenge pre publication manuscript (or the final publication manuscript). You will be contacted through your Synapse affiliated email when the manuscript has been released for citation. Note: Use of the BraTS datasets for creating and submitting benchmark results for publication on MLPerf.org is considered non-commercial use. It is further acceptable to republish results published on MLPerf.org, as well as to create unverified benchmark results consistent with the MLPerf.org rules in other locations. 
Please note that you should always adhere to the BraTS data usage guidelines and cite appropriately the aforementioned publications, as well as to the terms of use required by MLPerf.org.\n\nGSP Open Access Data Use Terms\n\nI request access to data collected as part of the Brain Genomics Superstruct Project (GSP) of Harvard University and the Massachusetts General Hospital, and I agree to the following:\n1. I will not attempt to establish the identity of or attempt to contact any of the included human subjects.\n2. I will not attempt to link any of the distributed data to any other data that might contain information about the included human subjects.\n3. I understand that under no circumstances will the code that would link these data to Protected Health Information be given to me, nor will any additional information about individual human subjects be released to me under these Open Access Data Use Terms.\n4. I will comply with all relevant rules and regulations imposed by my institution. This may mean that I need my research to be approved or declared exempt by a committee that oversees research on human subjects e.g., my Internal Review Board or Ethics Committee. Different committees operate under different national, state, and local laws and may interpret regulations differently, so it is important to ask about this.\n5. I may redistribute original GSP Open Access data and any derived data as long as the data are redistributed under these same Data Use Terms.\n6. 
I will acknowledge the use of GSP data and data derived from GSP data when publicly presenting any results or algorithms that benefitted from their use.\n (a) Papers, book chapters, books, posters, oral presentations, and all other printed and digital presentations of results derived from GSP data should contain the following wording in the acknowledgments section: Data were provided [in part] by the Brain Genomics Superstruct Project of Harvard University and the Massachusetts General Hospital, (Principal Investigators: Randy Buckner, Joshua Roffman, and Jordan Smoller), with support from the Center for Brain Science Neuroinformatics Research Group, the Athinoula A. Martinos Center for Biomedical Imaging, and the Center for Human Genetic Research. 20 individual investigators at Harvard and MGH generously contributed data to the overall project.\n (b) Authors of publications or presentations using GSP data should cite relevant publications describing the methods used by the GSP to acquire and process the data. The specific publications that are appropriate to cite in any given study will depend on what GSP data were used and for what purposes. An annotated and appropriately up-to-date list of publications that may warrant consideration is available at http://neuroinformatics.harvard.edu/gsp/\n (c) The GSP as a consortium should not be included as an author of publications or presentations if this authorship would be based solely on the use of GSP data.\n7. Failure to abide by these guidelines will result in termination of my privileges to access GSP data.\n\nHCP WU-Minn and Test-Retest Data Use Terms\n\nI request access to data collected by the Washington University - University of Minnesota Consortium of the Human Connectome Project (WU-Minn HCP), and I agree to the following:\n1. I will not attempt to establish the identity of or attempt to contact any of the included human subjects.\n2. 
I understand that under no circumstances will the code that would link these data to Protected Health Information be given to me, nor will any additional information about individual human subjects be released to me under these Open Access Data Use Terms.\n3. I will comply with all relevant rules and regulations imposed by my institution. This may mean that I need my research to be approved or declared exempt by a committee that oversees research on human subjects, e.g. my IRB or Ethics Committee. The released HCP data are not considered de-identified, insofar as certain combinations of HCP Restricted Data (available through a separate process) might allow identification of individuals. Different committees operate under different national, state and local laws and may interpret regulations differently, so it is important to ask about this. If needed and upon request, the HCP will provide a certificate stating that you have accepted the HCP Open Access Data Use Terms.\n4. I may redistribute original WU-Minn HCP Open Access data and any derived data as long as the data are redistributed under these same Data Use Terms.\n5. 
I will acknowledge the use of WU-Minn HCP data and data derived from WU-Minn HCP data when publicly presenting any results or algorithms that benefitted from their use.\n (a) Papers, book chapters, books, posters, oral presentations, and all other printed and digital presentations of results derived from HCP data should contain the following wording in the acknowledgments section: \"Data were provided [in part] by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University.\"\n (b) Authors of publications or presentations using WU-Minn HCP data should cite relevant publications describing the methods used by the HCP to acquire and process the data. The specific publications that are appropriate to cite in any given study will depend on what HCP data were used and for what purposes. An annotated and appropriately up-to-date list of publications that may warrant consideration is available at http://www.humanconnectome.org/about/acknowledgehcp.html\n (c) The WU-Minn HCP Consortium as a whole should not be included as an author of publications or presentations if this authorship would be based solely on the use of WU-Minn HCP data.\n6. Failure to abide by these guidelines will result in termination of my privileges to access WU-Minn HCP data.\n\nBy requesting access, you agree to the above terms.\n", "extra_gated_fields": {"I agree to these terms": "checkbox", "Name": "text", "Email": "text"}}
- disabled: false
- gated: auto
- lastModified: 2026-01-25T09:25:23
- likes: 69
- trendingScore: 65
- private: false
- sha: 580083cd4f33b145d5ffdc57265915128e541ffe
- description: FOMO300K: Brain MRI Dataset for Large-Scale Self-Supervised Learning with Clinical Data. Dataset paper preprint: "A large-scale heterogeneous 3D magnetic resonance brain imaging dataset for self-supervised learning", https://arxiv.org/pdf/2506.14432v2. FOMO-300K is a large-scale dataset of brain MRI scans, including both clinical and research-grade scans. It covers a wide range of sequences, including T1, MPRAGE, T2, T2*, FLAIR, SWI, T1c, PD, DWI… See the full description on the dataset page: https://huggingface.co/datasets/FOMO-MRI/FOMO300K.
- downloads: 11,653
- downloadsAllTime: 11,653
- tags:
[
"task_categories:image-feature-extraction",
"task_categories:zero-shot-classification",
"license:other",
"size_categories:100K<n<1M",
"modality:3d",
"modality:image",
"arxiv:2506.14432",
"region:us",
"brain",
"mri",
"ssl",
"foundation_model",
"3d",
"image"
]
- createdAt: 2026-01-13T08:42:10
- paperswithcode_id: null
- citation: null
**UniParser/OmniScience**
- _id: 695df55a4e351abe5277cca5
- author: UniParser
- cardData:
{"license": "cc-by-nc-sa-4.0", "task_categories": ["image-to-text"], "extra_gated_heading": "Request Access to This Dataset", "extra_gated_description": "Please complete the required fields below to request access. Access will be automatically granted upon submission.", "extra_gated_fields": {"Full Name": {"type": "text"}, "Email": {"type": "text"}, "Affiliation (Company / University)": {"type": "text"}, "I agree this dataset is for non-commercial use ONLY": {"type": "checkbox"}}, "extra_gated_button_content": "Submit Access Request"}
- disabled: false
- gated: auto
- lastModified: 2026-01-22T02:55:43
- likes: 97
- trendingScore: 58
- private: false
- sha: 9c9fdac9ea87b36e3889330463cd4aee2e81ce95
- description: OmniScience: A Large-scale Dataset for Scientific Image Understanding. News: 2026-01-21, the OmniScience dataset ranked Top 8 on Hugging Face Datasets Trending (Top 1 in the Image Caption field); 2026-01-17, the dataset surpassed 5,000 downloads within 5 days of its release; 2026-01-12, official release of the OmniScience dataset; 2025-06-01, completion of the original dataset collection. Dataset Summary: OmniScience is an ultra-large-scale… See the full description on the dataset page: https://huggingface.co/datasets/UniParser/OmniScience.
- downloads: 8,446
- downloadsAllTime: 8,453
- tags:
[
"task_categories:image-to-text",
"license:cc-by-nc-sa-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"format:optimized-parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:polars",
"library:mlcroissant",
"arxiv:2512.15098",
"region:us"
]
- createdAt: 2026-01-07T05:55:38
- paperswithcode_id: null
- citation: null
**moonworks/lunara-aesthetic**
- _id: 69645867fd167898fdec27e6
- author: moonworks
- cardData:
{"license": "apache-2.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "region", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "topic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2953317713, "num_examples": 2000}], "download_size": 2970387971, "dataset_size": 2953317713}, "task_categories": ["text-to-image"], "tags": ["art"], "size_categories": ["1K<n<10K"]}
disabled: false | gated: False | lastModified: 2026-01-22T08:40:29 | likes: 69 | trendingScore: 58 | private: false | sha: fcf45a62e226560ae63e60eb01c4d40372457965
Dataset Card for Moonworks Lunara Aesthetic Dataset
Sample Images
Dataset Summary
paper: https://arxiv.org/abs/2601.07941
The Lunara Aesthetic Dataset is a curated collection of 2,000 high-quality image–prompt pairs designed for controlled research on prompt grounding, style conditioning, and aesthetic alignment in text-to-image generation.
All images are generated using the Moonworks Lunara, a sub-10B parameter… See the full description on the dataset page: https://huggingface.co/datasets/moonworks/lunara-aesthetic.
downloads: 3,916 | downloadsAllTime: 3,916
[
"task_categories:text-to-image",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:polars",
"library:mlcroissant",
"arxiv:2601.07941",
"region:us",
"art"
]
createdAt: 2026-01-12T02:11:51 | paperswithcode_id: null | citation: null
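The `cardData` column stores each dataset card's front matter as a JSON string, so the schema of any listed dataset can be read without loading it. A minimal sketch, parsing the moonworks/lunara-aesthetic card from the row above (the JSON literal is copied verbatim from that row):

```python
import json

# cardData JSON string exactly as it appears in the moonworks/lunara-aesthetic row
raw = '{"license": "apache-2.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "region", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "topic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2953317713, "num_examples": 2000}], "download_size": 2970387971, "dataset_size": 2953317713}, "task_categories": ["text-to-image"], "tags": ["art"], "size_categories": ["1K<n<10K"]}'

card = json.loads(raw)

# feature names and train-split size, straight from dataset_info
features = [f["name"] for f in card["dataset_info"]["features"]]
train = card["dataset_info"]["splits"][0]

print(card["license"])        # apache-2.0
print(features)               # ['image', 'prompt', 'region', 'category', 'topic']
print(train["num_examples"])  # 2000
```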
_id: 696a53dfe8359277ca69b28a | id: rootsautomation/pubmed-ocr | author: rootsautomation
{"language": ["en"], "license": "other", "size_categories": ["1M<n<10M"], "task_categories": ["image-to-text", "image-text-to-text"], "pretty_name": "PubMed-OCR", "arxiv": 2601.11425, "dataset_info": {"features": [{"name": "basename", "dtype": "string"}, {"name": "page", "dtype": "int32"}, {"name": "license", "dtype": "string"}, {"name": "pmid", "dtype": "string"}, {"name": "accession_id", "dtype": "string"}, {"name": "article_citation", "dtype": "string"}, {"name": "pdf_bytes", "dtype": "binary"}, {"name": "ocr_json", "dtype": "string"}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train-*.parquet"}]}], "license_name": "pubmed-ocr-multiple-cc-licenses", "tags": ["biology", "medical", "ocr", "multimodal"]}
disabled: false | gated: False | lastModified: 2026-01-22T19:58:29 | likes: 57 | trendingScore: 56 | private: false | sha: d03682f1b9e4d1c2a4d48657063cc467a464363d
PubMed-OCR: PMC Open Access OCR Annotations
PubMed-OCR is an OCR-centric corpus of scientific articles derived from PubMed Central Open Access PDFs. Each page is rendered to an image and annotated with Google Cloud Vision OCR, released in a compact JSON schema with word-, line-, and paragraph-level bounding boxes.
Scale (release):
209.5K articles
~1.5M pages
~1.3B words (OCR tokens)
This dataset is intended to support layout-aware modeling, coordinate-grounded QA, and evaluation… See the full description on the dataset page: https://huggingface.co/datasets/rootsautomation/pubmed-ocr.
downloads: 1,718 | downloadsAllTime: 1,718
[
"task_categories:image-to-text",
"task_categories:image-text-to-text",
"language:en",
"license:other",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:polars",
"library:mlcroissant",
"arxiv:2601.11425",
"region:us",
"biology",
"medical",
"ocr",
"multimodal"
]
createdAt: 2026-01-16T15:06:07 | paperswithcode_id: null | citation: null
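The PubMed-OCR scale figures above imply roughly 7 pages per article and on the order of 870 OCR words per page. A quick back-of-the-envelope check, using the rounded numbers exactly as stated in the card:

```python
articles = 209_500       # "209.5K articles"
pages = 1_500_000        # "~1.5M pages"
words = 1_300_000_000    # "~1.3B words (OCR tokens)"

pages_per_article = pages / articles
words_per_page = words / pages

print(round(pages_per_article, 1))  # 7.2
print(round(words_per_page))        # 867
```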
_id: 6938038933eda94c0094c844 | id: raidium/RadImageNet-VQA | author: raidium
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10M"], "task_categories": ["visual-question-answering"], "tags": ["medical"], "pretty_name": "RadImageNet-VQA", "dataset_info": [{"config_name": "alignment", "features": [{"name": "image", "dtype": "image"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "content_type", "dtype": "string"}, {"name": "correct_text", "dtype": "null"}, {"name": "is_abnormal", "dtype": "bool"}, {"name": "location", "dtype": "string"}, {"name": "modality", "dtype": "string"}, {"name": "pathology", "dtype": "string"}, {"name": "question_id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 29401649909, "num_examples": 750009}, {"name": "val", "num_bytes": 3175441830, "num_examples": 83668}], "download_size": 38405331105, "dataset_size": 32577091739}, {"config_name": "benchmark", "features": [{"name": "image", "dtype": "image"}, {"name": "question", "dtype": "string"}, {"name": "choices", "list": "string"}, {"name": "answer", "dtype": "string"}, {"name": "question_type", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "content_type", "dtype": "string"}, {"name": "correct_text", "dtype": "string"}, {"name": "is_abnormal", "dtype": "bool"}, {"name": "location", "dtype": "string"}, {"name": "modality", "dtype": "string"}, {"name": "pathology", "dtype": "string"}, {"name": "question_id", "dtype": "string"}]}], "splits": [{"name": "test", "num_bytes": 414947216, "num_examples": 9000}], "download_size": 361133763, "dataset_size": 414947216}, {"config_name": "instruct", "features": [{"name": "image", "dtype": "image"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "content_type", "dtype": "string"}, {"name": "correct_text", "dtype": "string"}, {"name": "is_abnormal", "dtype": "bool"}, 
{"name": "location", "dtype": "string"}, {"name": "modality", "dtype": "string"}, {"name": "pathology", "dtype": "string"}, {"name": "question_id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 29904541796, "num_examples": 750009}, {"name": "val", "num_bytes": 3231558586, "num_examples": 83668}], "download_size": 38424398344, "dataset_size": 33136100382}], "configs": [{"config_name": "alignment", "data_files": [{"split": "train", "path": "alignment/train-*"}, {"split": "val", "path": "alignment/val-*"}]}, {"config_name": "instruct", "data_files": [{"split": "train", "path": "instruct/train-*"}, {"split": "val", "path": "instruct/val-*"}]}, {"config_name": "benchmark", "data_files": [{"split": "test", "path": "benchmark/test-*"}]}], "extra_gated_prompt": "### RADIMAGENET LLC Dataset Research Use Agreement\n \n1. RadImageNet grants you permission, upon your agreeing to the terms of the Research Use Agreement, to view and use the Dataset for personal, non-commercial (e.g., academic) research purposes only. Any commercial use, sale, or other monetization, by you or your affiliates, is strictly prohibited under any and all circumstances.\n2. Other than any limited rights expressly granted herein to you, RadImageNet retains all rights, title, and interest in the Dataset.\n3. You may make a verbatim copy of the Dataset for non-commercial research use as permitted in the Research Use Agreement. You may not alter this verbatim copy for any reason. If another user within your organization wishes to use the Dataset, they must register as an individual user and comply with all the terms of the Research Use Agreement.\n4. YOU MAY NOT DISTRIBUTE, PUBLISH, OR REPRODUCE A COPY of any portion, including the entirety, of the Dataset to anyone without express and specific prior written permission from RadImageNet.\n5. YOU MAY NOT SHARE THE DOWNLOAD LINK to the Dataset with others. 
For example, if someone other than you within your organization wishes to use or view the Dataset, they must register as an individual user and agree to and comply with all the terms of the Research Use Agreement.\n6. You must not modify, reverse engineer, decompile, or create derivative works from the Dataset. You must not remove or alter any copyright or other proprietary notices in the Dataset.\n7. The Dataset has not been reviewed or approved by the Food and Drug Administration, or any other regulatory agency of the United States of America. The Dataset is being provided to you strictly and only for non-clinical, research use. In no event shall data or images generated through the use, directly or indirectly, in whole or in part, of the Dataset be used or relied upon in the diagnosis or provision of patient care. This Research Use Agreement expressly forbids the use, directly or indirectly, in whole or in part, of the Dataset in the diagnosis or provision of patient care.\n8. THE DATASET IS PROVIDED \u201cAS IS,\u201d AND RADIMAGENET AND ITS COLLABORATORS MAKE NO WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ANY PARTICULAR PURPOSE,2 NOR DO THEY ASSUME ANY LIABILITY OR RESPONSIBILITY FOR THE USE OF THE DATASET.\n9. You will not attempt to identify or re-identify any of the individual data subjects (e.g., patients). Identification or re-identification of individuals is strictly prohibited. Any identification or re-identification of any individual data subject shall be immediately reported to RadImageNet and may be subject to immediate termination of the use of the Dataset.\n\n10. Any violation of the Research Use Agreement or other impermissible use shall be grounds for immediate termination of use of the Dataset. It is your duty to promptly report to RadImageNet any knowledge of any violation at any time. 
In the event that RadImageNet determines that you have violated this Research Use Agreement or made other impermissible use of the Dataset, RadImageNet may direct that you immediately return all copies of the Dataset and retain no copies thereof. RadImageNet may do this even if you did not cause the violation or impermissible use.\n\nIn consideration for your agreement to the terms and conditions contained in the Research Use Agreement, RadImageNet grants you limited permission to view and use the Dataset for personal, non-commercial research, as described herein. You may not otherwise copy, reproduce, retransmit, distribute, publish, commercially exploit or otherwise transfer any material from or related to the Dataset.\n#### Limitation of Use\nYou may use the Dataset for legal purposes only.\n#### Indemnification\nYou agree to indemnify and hold RadImageNet harmless from and not liable in any way for any claims, losses or damages, including legal fees, arising out of or resulting from your use of the Dataset or your violation or role in violation of the Research Use Agreement. You agree to fully cooperate in RadImageNet\u2019s defense against any such claims. These terms and all other terms of the Research Use Agreement shall be governed by and interpreted in accordance with the laws of New York State.", "extra_gated_fields": {"Name": "text", "Title": "text", "Date": "date_picker", "By clicking Submit below I accept the terms of this RADIMAGENET LLC Dataset Research Use Agreement (hereinafter \u201cthe Research Use Agreement\u201d), as well as to the Terms of Use of the RADIMAGENET LLC (hereinafter \u201cRadImageNet\u201d) website as posted and updated periodically": "checkbox"}, "extra_gated_button_content": "Submit"}
disabled: false | gated: auto | lastModified: 2025-12-19T10:06:57 | likes: 54 | trendingScore: 51 | private: false | sha: fe2154107adfd74f5b8218be6d2b3b127b668d32
RadImageNet-VQA: A Large-Scale CT and MRI Dataset for Radiologic Visual Question Answering
Paper
Dataset Details
We introduce RadImageNet-VQA, a large-scale dataset designed for training and benchmarking radiologic VQA on CT and MRI exams. Built from the CT/MRI subset of RadImageNet and its expert-curated anatomical and pathological annotations, RadImageNet-VQA provides 750K images with 7.5M generated samples, including 750K medical captions for visual-text… See the full description on the dataset page: https://huggingface.co/datasets/raidium/RadImageNet-VQA.
downloads: 1,425 | downloadsAllTime: 1,597
[
"task_categories:visual-question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"medical"
]
createdAt: 2025-12-09T11:10:01 | paperswithcode_id: null | citation: null
_id: 67a404bc8c6d42c5ec097433 | id: Anthropic/EconomicIndex | author: Anthropic
{"language": "en", "pretty_name": "EconomicIndex", "tags": ["AI", "LLM", "Economic Impacts", "Anthropic"], "viewer": true, "license": "mit", "configs": [{"config_name": "release_2026_01_15", "data_files": [{"split": "raw_claude_ai", "path": "release_2026_01_15/data/intermediate/aei_raw_claude_ai_2025-11-13_to_2025-11-20.csv"}, {"split": "raw_1p_api", "path": "release_2025_09_15/data/intermediate/aei_raw_1p_api_2025-11-13_to_2025-11-20.csv"}]}]}
disabled: false | gated: False | lastModified: 2026-01-15T23:52:53 | likes: 437 | trendingScore: 48 | private: false | sha: f7f2edfbbcf28329dd621fc8e3cc83d0d99b72eb
The Anthropic Economic Index
Overview
The Anthropic Economic Index provides insights into how AI is being incorporated into real-world tasks across the modern economy.
Data Releases
This repository contains multiple data releases, each with its own documentation:
2026-01-15 Release: Updated analysis with economic primitives and Sonnet 4.5
2025-09-15 Release: Updated analysis with geographic and first-party API data using Sonnet 4
2025-03-27 Release: Updated… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/EconomicIndex.
downloads: 6,484 | downloadsAllTime: 40,529
[
"language:en",
"license:mit",
"arxiv:2503.04761",
"region:us",
"AI",
"LLM",
"Economic Impacts",
"Anthropic"
]
createdAt: 2025-02-06T00:39:24 | paperswithcode_id: null | citation: null
_id: 67fce65dd1ec7d15ba6a2da3 | id: zwhe99/DeepMath-103K | author: zwhe99
{"license": "mit", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "final_answer", "dtype": "string"}, {"name": "difficulty", "dtype": "float64"}, {"name": "topic", "dtype": "string"}, {"name": "r1_solution_1", "dtype": "string"}, {"name": "r1_solution_2", "dtype": "string"}, {"name": "r1_solution_3", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4959744761.05883, "num_examples": 103022}], "download_size": 2136106260, "dataset_size": 4959744761.05883}, "task_categories": ["text-generation", "text2text-generation"], "language": ["en"], "tags": ["math", "reasoning", "rl"], "pretty_name": "deepmath-103k", "size_categories": ["100K<n<1M"]}
disabled: false | gated: False | lastModified: 2025-05-29T03:37:07 | likes: 336 | trendingScore: 46 | private: false | sha: 5cf055d1fe3d7a2eb19719ac020211469736ae44
DeepMath-103K
News
May 8, 2025: We found that 48 samples contained hints that revealed the answers. The relevant questions have now been revised to remove the leaked answers.
April 14, 2025: We release DeepMath-103K, a large-scale dataset featuring challenging, verifiable, and decontaminated math problems tailored for RL and SFT. We open source:… See the full description on the dataset page: https://huggingface.co/datasets/zwhe99/DeepMath-103K.
downloads: 6,634 | downloadsAllTime: 90,431
[
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.11456",
"region:us",
"math",
"reasoning",
"rl"
]
createdAt: 2025-04-14T10:41:33 | paperswithcode_id: null | citation: null
_id: 6967b2da7b115954f1c9327c | id: mercor/apex-agents | author: mercor
{"license": "cc-by-4.0", "language": ["en"], "tags": ["agents", "benchmarking", "finance", "legal", "management-consulting", "tool-use", "long-horizon"], "pretty_name": "apex-agents", "size_categories": ["n<1K"]}
disabled: false | gated: False | lastModified: 2026-01-22T00:33:03 | likes: 45 | trendingScore: 45 | private: false | sha: 602aae289ba9f4b74c27635e6f3a1738b000e5be
APEX-Agents
APEX-Agents is a benchmark from Mercor for evaluating whether AI agents can execute long-horizon, cross-application professional services tasks. Tasks were created by investment banking analysts, management consultants, and corporate lawyers, and require agents to navigate realistic work environments with files and tools (e.g., docs, spreadsheets, PDFs, email, chat, calendar).
Tasks: 480 total (160 per job category)
Worlds: 33 total (10 banking, 11 consulting, 12 law)… See the full description on the dataset page: https://huggingface.co/datasets/mercor/apex-agents.
downloads: 721 | downloadsAllTime: 721
[
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"arxiv:2601.14242",
"region:us",
"agents",
"benchmarking",
"finance",
"legal",
"management-consulting",
"tool-use",
"long-horizon"
]
createdAt: 2026-01-14T15:14:34 | paperswithcode_id: null | citation: null
_id: 6965b354f2c297a7078582d4 | id: Qwen/DeepPlanning | author: Qwen
{"language": ["en", "zh"], "license": "apache-2.0", "viewer": false, "task_categories": ["text-generation"], "tags": ["planning", "llm-benchmark", "reasoning", "autonomous-agents"], "pretty_name": "DeepPlanning", "size_categories": ["1k<n<10k"]}
disabled: false | gated: False | lastModified: 2026-01-27T05:22:17 | likes: 45 | trendingScore: 41 | private: false | sha: 4769c4974f6a2ac026a725a9e99320727454ead8
DeepPlanning: Benchmarking Long-Horizon Agentic Planning with Verifiable Constraints
DeepPlanningBench is a challenging benchmark for evaluating long-horizon agentic planning capabilities of large language models (LLMs) with verifiable constraints. It features realistic multi-day travel planning and multi-product shopping tasks that require proactive information acquisition, local constrained reasoning, and global constrained optimization.
Website: … See the full description on the dataset page: https://huggingface.co/datasets/Qwen/DeepPlanning.
downloads: 22 | downloadsAllTime: 22
[
"task_categories:text-generation",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:webdataset",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2601.18137",
"region:us",
"planning",
"llm-benchmark",
"reasoning",
"autonomous-agents"
]
createdAt: 2026-01-13T02:52:04 | paperswithcode_id: null | citation: null
Changelog
NEW Changes July 25th
- Added `baseModels` field to models, which shows the models that the user tagged as base models for that model
Example:
{
"models": [
{
"_id": "687de260234339fed21e768a",
"id": "Qwen/Qwen3-235B-A22B-Instruct-2507"
}
],
"relation": "quantized"
}
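Since the field value is plain JSON, consumers can parse it directly; a minimal sketch using the example above verbatim:

```python
import json

# the baseModels example from the changelog entry above, copied verbatim
base_models = json.loads("""
{
  "models": [
    {
      "_id": "687de260234339fed21e768a",
      "id": "Qwen/Qwen3-235B-A22B-Instruct-2507"
    }
  ],
  "relation": "quantized"
}
""")

# repo ids of the models tagged as base models, plus the relation type
base_ids = [m["id"] for m in base_models["models"]]
relation = base_models["relation"]
print(relation, base_ids)  # quantized ['Qwen/Qwen3-235B-A22B-Instruct-2507']
```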
NEW Changes July 9th
- Fixed issue with the `gguf` column: an integer overflow had broken the import pipeline for a few weeks
NEW Changes Feb 27th
- Added new fields on the `models` split: `downloadsAllTime`, `safetensors`, `gguf`
- Added new field on the `datasets` split: `downloadsAllTime`
- Added new split: `papers`, which contains all of the Daily Papers
Updated Daily
- Downloads last month: 4,118
- Size of downloaded dataset files: 1.76 GB
- Size of the auto-converted Parquet files: 1.76 GB
- Number of rows: 4,256,635
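A quick derived figure from the stats above (assuming the 1.76 GB Parquet size is as stated, not a rounding artifact): the average Parquet footprint is on the order of 400 bytes per row.

```python
parquet_bytes = 1.76e9   # "Size of the auto-converted Parquet files: 1.76 GB"
num_rows = 4_256_635     # "Number of rows"

bytes_per_row = parquet_bytes / num_rows
print(round(bytes_per_row))  # 413
```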