
Bittensor Subnet 13 Reddit Dataset

Data-universe: The finest collection of social media data the web has to offer

Dataset Summary

This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks. For more information about the dataset, please visit the official repository.

Supported Tasks

The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example:

  • Sentiment Analysis
  • Topic Modeling
  • Community Analysis
  • Content Categorization

Languages

Primary language: English. Because the data is contributed by a decentralized network of miners, multilingual content may also be present.

Dataset Structure

Data Instances

Each instance represents a single Reddit post or comment with the following fields:

Data Fields

  • text (string): The main content of the Reddit post or comment.
  • label (string): Sentiment or topic category of the content.
  • dataType (string): Indicates whether the entry is a post or a comment.
  • communityName (string): The name of the subreddit where the content was posted.
  • datetime (string): The UTC timestamp at which the post or comment was created.
  • username_encoded (string): An encoded version of the username to maintain user privacy.
  • url_encoded (string): An encoded version of any URLs included in the content.

Data Splits

This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
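Because no fixed splits ship with the dataset, a common approach is a temporal split on the `datetime` field. The sketch below assumes records are plain dictionaries with an ISO-8601 `datetime` string; the cutoff date is arbitrary:

```python
from datetime import datetime

def temporal_split(records, cutoff_iso: str):
    """Split records into (train, test) by comparing each record's
    `datetime` field against an ISO-8601 cutoff timestamp."""
    cutoff = datetime.fromisoformat(cutoff_iso.replace("Z", "+00:00"))
    train, test = [], []
    for rec in records:
        ts = datetime.fromisoformat(rec["datetime"].replace("Z", "+00:00"))
        (train if ts < cutoff else test).append(rec)
    return train, test

# Usage with toy records (real records carry the full schema):
records = [
    {"datetime": "2025-03-05T10:00:00Z", "text": "early comment"},
    {"datetime": "2025-03-15T10:00:00Z", "text": "late comment"},
]
train, test = temporal_split(records, "2025-03-10T00:00:00Z")
```

A temporal split also avoids leakage when the downstream task is forecasting or trend analysis, since the test set contains only content posted after everything in the training set.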

Dataset Creation

Source Data

Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.

Personal and Sensitive Information

All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
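The card does not document the exact encoding scheme. A typical privacy-preserving choice is a salted one-way hash, sketched here purely as an illustration (the function and salt are assumptions, not the dataset's actual method):

```python
import hashlib

def encode_username(username: str, salt: str = "example-salt") -> str:
    """Illustrative one-way encoding: salted SHA-256, hex-encoded.
    NOTE: this is NOT the dataset's documented scheme, just a common
    pattern for pseudonymizing identifiers."""
    return hashlib.sha256((salt + username).encode("utf-8")).hexdigest()

# The same input always maps to the same token, so per-user grouping
# across rows remains possible while the raw username is not recoverable.
```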

Considerations for Using the Data

Social Impact and Biases

Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.

Limitations

  • Data quality may vary because content is contributed by a decentralized network of miners.
  • The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
  • Temporal biases may exist due to real-time collection methods.
  • The dataset is limited to public subreddits and does not include private or restricted communities.

Additional Information

Licensing Information

The dataset is released under the MIT license. Use of this dataset is also subject to the Reddit Terms of Use.

Citation Information

If you use this dataset in your research, please cite it as follows:

@misc{Jacksss1232025datauniversereddit_dataset_44,
  title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
  author={Jacksss123},
  year={2025},
  url={https://huggingface.co/datasets/Jacksss123/reddit_dataset_44}
}

Contributions

To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.

Dataset Statistics

[This section is automatically updated]

  • Total Instances: 143327
  • Date Range: 2025-03-04T00:00:00Z to 2025-03-17T00:00:00Z
  • Last Updated: 2025-04-04T22:18:56Z

Data Distribution

  • Posts: 4.42%
  • Comments: 95.58%

Top 10 Subreddits

For full statistics, please refer to the stats.json file in the repository.

Rank  Subreddit         Total Count  Percentage
1     r/CryptoCurrency  39236        27.38%
2     r/Bitcoin         39047        27.24%
3     r/trump           29045        20.26%
4     r/Audi            15358        10.72%
5     r/Toyota          5675         3.96%
6     r/devops          3102         2.16%
7     r/kubernetes      2012         1.40%
8     r/AskReddit       793          0.55%
9     r/bittensor_      359          0.25%
10    r/crypto          234          0.16%

Update History

Date                  New Instances  Total Instances
2025-03-06T16:56:23Z  14012          14012
2025-03-07T10:20:38Z  11765          25777
2025-03-08T03:54:05Z  11034          36811
2025-03-08T03:54:09Z  11034          47845
2025-03-08T21:54:28Z  8505           56350
2025-03-09T15:54:46Z  8035           64385
2025-03-10T16:07:42Z  9768           74153
2025-03-11T10:07:59Z  13425          87578
2025-03-12T04:35:53Z  12734          100312
2025-03-12T22:36:20Z  9148           109460
2025-03-13T16:36:40Z  7213           116673
2025-03-14T10:36:59Z  8240           124913
2025-03-15T04:33:02Z  8057           132970
2025-03-15T22:33:50Z  1              132971
2025-03-16T16:34:05Z  1              132972
2025-03-17T23:13:32Z  10354          143326
2025-04-04T22:18:56Z  1              143327