Add link to Github repository

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +19 -14
README.md CHANGED
@@ -1,33 +1,38 @@
 ---
-task_categories:
-- text-generation
 language:
 - en
-pretty_name: Creative Commons YouTube
+task_categories:
+- text-generation
+pretty_name: Python Enhancement Proposals
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path:
+    - v0/documents/*.jsonl.gz
 ---
 
-# Creative Commons YouTube
+# Python Enhancement Proposals
 
 ## Description
-YouTube is large-scale video-sharing platform where users have the option of uploading content under a CC BY license.
-To collect high-quality speech-based textual content and combat the rampant license laundering on YouTube, we manually curated a set of over 2,000 YouTube channels that consistently release original openly licensed content containing speech.
-The resulting collection spans a wide range of genres, including lectures, tutorials, reviews, video essays, speeches, and vlogs.
-From these channels, we retrieved over 1.1 million openly licensed videos comprising more than 470,000 hours content.
-Finally, each video was transcribed to text using the [Whisper speech recognition model](https://github.com/openai/whisper).
-Code for collecting, processing, and preparing this dataset is available [here](https://github.com/nkandpa2/youtube-commons).
-
+Python Enhancement Proposals, or PEPs, are design documents that generally provide a technical specification and rationale for new features of the Python programming language.
+There have been 661 PEPs published.
+The majority of PEPs are published in the Public Domain, but 5 were published under the “Open Publication License” and are omitted from this dataset.
+PEPs are long, highly polished, and technical in nature, and they often include code examples paired with their prose.
+PEPs are authored in reStructuredText; we used [pandoc](https://pandoc.org/) to convert them to plain text.
+This dataset is part of [The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text](https://huggingface.co/papers/2506.05209).
 
 ## Dataset Statistics
 | Documents | UTF-8 GB |
 |-----------|----------|
-| 1,129,692 | 21.5 |
+| 656 | 0.01 |
 
 ## License Issues
 While we aim to produce datasets with completely accurate licensing information, license laundering and inaccurate metadata can cause us to erroneously assign the incorrect license to some documents (for further discussion of this limitation, please see [our paper](https://huggingface.co/papers/2506.05209)). If you believe you have found an instance of incorrect licensing in this dataset, please [start a discussion](https://github.com/r-three/common-pile/discussions/new) on this repository.
 
 ## Other Versions
-This is the "raw" version of the Creative Commons YouTube dataset.
-If you are looking for the filtered version used to train [Comma v0.1](https://huggingface.co/common-pile/comma-v0.1), you can find it [here](https://huggingface.co/datasets/common-pile/youtube_filtered).
+This is the "raw" version of the Python Enhancement Proposals dataset.
+If you are looking for the filtered version used to train [Comma v0.1](https://huggingface.co/common-pile/comma-v0.1), you can find it [here](https://huggingface.co/datasets/common-pile/python_enhancement_proposals_filtered).
 
 ## Citation
 If you use this dataset, please cite:
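The `configs` entry added above points the default train split at gzipped JSON Lines shards under `v0/documents/`. As a minimal sketch of reading that file format with only the standard library (the `id` and `text` field names below are illustrative assumptions, not the dataset's documented schema):

```python
import gzip
import json

def read_jsonl_gz(path):
    """Yield one JSON object per line from a gzipped JSON Lines shard."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Write a tiny shard in the same layout for demonstration purposes.
with gzip.open("shard-00000.jsonl.gz", "wt", encoding="utf-8") as f:
    f.write(json.dumps({"id": "pep-0001", "text": "PEP Purpose and Guidelines"}) + "\n")
    f.write(json.dumps({"id": "pep-0008", "text": "Style Guide for Python Code"}) + "\n")

docs = list(read_jsonl_gz("shard-00000.jsonl.gz"))
print(len(docs))  # 2
```

Within the `datasets` ecosystem the YAML config makes this manual parsing unnecessary, since `load_dataset` on the repository resolves the declared `data_files` pattern automatically.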