Update README.md
# GitHub Issues Dataset Card

**Author / Maintainer:** @xanderIV
**Point of Contact:** @xanderIV

---

## Dataset Description

### Dataset Summary

GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [repository](https://github.com/huggingface/datasets).

This dataset is curated and documented by **@xanderIV** for educational and research purposes.

It can be used for:

- Semantic search
- Multilabel text classification
- Automated issue triaging
- Topic modeling
- LLM fine-tuning
- Retrieval-Augmented Generation (RAG) experiments

The dataset contains English-language technical discussions related to NLP, computer vision, speech, multimodal ML systems, and ML infrastructure.

This dataset is particularly relevant for:

- Developer assistant systems
- LLM-powered support automation
- DevOps / MLOps / LLMOps workflows
- Research in applied ML systems

---

## Supported Tasks and Leaderboards

### 1. Text Classification (`text-classification`)

The dataset can be used for **multilabel text classification**, where a model predicts one or more labels (e.g., `bug`, `enhancement`, `documentation`) for each issue.

**Typical metrics:**

- F1 score
- Accuracy
- Precision
- Recall

**Suggested models:**

- `distilbert-base-uncased`
- `roberta-base`
- `microsoft/deberta-v3-base`
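
As an illustrative sketch of the multilabel setup, the label list attached to each issue can be encoded as a fixed-size 0/1 target vector before training any of the models above. The records and label vocabulary below are invented examples, not taken from the dataset:

```python
# Minimal sketch: encode per-issue label lists as multilabel 0/1 targets.
# The records below are illustrative; the "labels" field mirrors this card's schema.

def binarize_labels(records, label_names):
    """Return the sorted label vocabulary and one 0/1 vector per record."""
    names = sorted(label_names)
    index = {name: i for i, name in enumerate(names)}
    targets = []
    for record in records:
        vector = [0] * len(names)
        for label in record["labels"]:
            if label in index:  # ignore labels outside the chosen vocabulary
                vector[index[label]] = 1
        targets.append(vector)
    return names, targets

records = [
    {"labels": ["bug", "documentation"]},
    {"labels": ["enhancement"]},
]
names, targets = binarize_labels(records, {"bug", "enhancement", "documentation"})
# names   == ["bug", "documentation", "enhancement"]
# targets == [[1, 1, 0], [0, 0, 1]]
```

The resulting vectors can be fed to any sequence classifier configured with a multilabel (sigmoid per label) head.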

---

### 2. Information Retrieval (`information-retrieval`)

The dataset can be used for **semantic search**, where the task is to retrieve the most relevant GitHub issue given a user query.

**Typical metrics:**

- MRR (Mean Reciprocal Rank)
- Recall@k
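
For reference, MRR averages the reciprocal rank of the first relevant result across queries. A minimal sketch, using made-up issue ids and relevance judgments:

```python
# Minimal sketch: Mean Reciprocal Rank over a batch of queries.
# Each entry pairs the retrieved issue ids (in rank order) with the relevant id.

def mean_reciprocal_rank(results):
    """results: list of (ranked_ids, relevant_id) pairs; returns MRR in [0, 1]."""
    total = 0.0
    for ranked_ids, relevant_id in results:
        if relevant_id in ranked_ids:
            total += 1.0 / (ranked_ids.index(relevant_id) + 1)
    return total / len(results)

batch = [
    ([101, 102, 103], 101),  # relevant issue at rank 1 -> contributes 1.0
    ([104, 105, 106], 106),  # relevant issue at rank 3 -> contributes 1/3
]
print(mean_reciprocal_rank(batch))  # (1.0 + 1/3) / 2 ~= 0.667
```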

**Suggested models:**

- `sentence-transformers/all-MiniLM-L6-v2`
- `BAAI/bge-base-en-v1.5`
- `intfloat/e5-base`
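
A real pipeline would embed queries and issues with one of the models above; as a dependency-free stand-in, the retrieval loop can be illustrated with bag-of-words cosine similarity. The issue titles here are invented:

```python
# Dependency-free stand-in for embedding-based search: rank issues by cosine
# similarity between bag-of-words vectors. A production pipeline would swap in
# one of the sentence-embedding models listed above.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query, issues):
    """Return the issue text most similar to the query."""
    q = Counter(query.lower().split())
    return max(issues, key=lambda text: cosine(q, Counter(text.lower().split())))

issues = [
    "Dataset loading fails with streaming=True",  # made-up titles
    "Add new audio feature to Dataset class",
]
print(search("dataset loading fails", issues))  # -> "Dataset loading fails with streaming=True"
```

Swapping `Counter`-based vectors for dense embeddings changes only the representation; the ranking loop stays the same.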

---

### 3. Issue Triaging (`other:issue-triaging`)

The dataset can be used for automated issue routing and classification.

The task consists of:

- Predicting labels
- Suggesting maintainers
- Routing issues to appropriate teams

**Metrics:**

- Classification accuracy
- Routing precision
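
A naive baseline for this task routes on predicted labels alone; the team names and label-to-team mapping below are hypothetical examples, not part of the dataset:

```python
# Naive triage baseline: route an issue to a team based on its labels.
# ROUTES is a hypothetical mapping for illustration only.

ROUTES = {
    "bug": "core-maintenance",
    "documentation": "docs",
    "enhancement": "feature-review",
}

def route_issue(labels, default="triage-inbox"):
    """Return the team for the first label with a route, else the default queue."""
    for label in labels:
        if label in ROUTES:
            return ROUTES[label]
    return default

print(route_issue(["bug", "datasets"]))  # core-maintenance
print(route_issue(["question"]))         # triage-inbox
```

Learned routers (e.g., the classifiers from task 1) are evaluated against exactly this kind of baseline.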

---

### 4. LLM Fine-Tuning (`other:llm-finetuning`)

The dataset can be used to fine-tune large language models for:

- Developer assistants
- Issue summarization
- Pull request review generation
- Support automation

**Suggested models:**

- `meta-llama/Meta-Llama-3-8B-Instruct`
- `mistralai/Mistral-7B-Instruct-v0.2`
- `Qwen/Qwen2.5-7B-Instruct`
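
Whichever base model is chosen, issues must first be rendered into training records; one common shape is a chat-style message list. A sketch for the summarization use case, where the field names mirror this card's example instance and the target summary is invented:

```python
# Sketch: turn one issue record into a chat-style fine-tuning example for
# issue summarization. The record and the target summary are illustrative.

def to_chat_example(issue, summary):
    """Build a messages-format training record from an issue dict."""
    return {
        "messages": [
            {"role": "system", "content": "Summarize the GitHub issue."},
            {"role": "user", "content": f"{issue['title']}\n\n{issue['body']}"},
            {"role": "assistant", "content": summary},
        ]
    }

issue = {
    "title": "Dataset loading fails with streaming=True",
    "body": "When trying to load the dataset with streaming enabled...",
}
example = to_chat_example(issue, "Streaming mode raises an error during load.")
print(example["messages"][1]["role"])  # user
```

Records in this shape can be serialized to JSONL and fed to most instruction-tuning toolchains.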

---

## Languages

- Primary language: English (`en`, BCP-47)

Text characteristics:

- Technical discussions
- Developer communication
- Bug reports
- Code snippets (Python, YAML, JSON, etc.)
- Configuration files

Language style: semi-formal, domain-specific (software engineering / ML infrastructure).

---

## Dataset Structure

### Data Instances

Example instance:

```json
{
  "issue_id": 12345,
  "title": "Dataset loading fails with streaming=True",
  "body": "When trying to load the dataset with streaming enabled...",
  "labels": ["bug", "datasets"],
  "author": "username",
  "created_at": "2023-06-10T14:32:00Z",
  "comments": [
    {
      "author": "maintainer",
      "text": "Can you provide the stack trace?"
    }
  ]
}
```