---
license: apache-2.0
language:
- en
pretty_name: "StackPulse-QA: Instruction-Tuning Q&A Pairs from Stack Overflow"
size_categories:
- 100K<n<1M
task_categories:
- question-answering
- text-generation
- text2text-generation
tags:
- stackoverflow
- instruction-tuning
- qa
- code
- fine-tuning
- alpaca-format
- llm-training
---
# StackPulse-QA: Instruction-Tuning Q&A Pairs from Stack Overflow
## Dataset Summary
Instruction-tuning Q&A dataset built from [Omarrran/StackPulse_778K_QnA_Code_dataset](https://huggingface.co/datasets/Omarrran/StackPulse_778K_QnA_Code_dataset) by joining question IDs with **BigQuery `bigquery-public-data.stackoverflow.posts_answers`** on `accepted_answer_id`.
Each sample consists of:
- `input_text_instruct` – the question (title + body) prefixed with an instruction
- `output_text` – the **accepted answer** from Stack Overflow

The format mirrors the instruction-tuning dataset used in DeepLearning.AI's *Finetuning Large Language Models* course, so it is ready for fine-tuning PaLM, LLaMA, Mistral, Gemma, Phi, and similar models.
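The join described above can be sketched locally with pandas. This is a minimal illustration of the logic, not the actual BigQuery pipeline; the toy `questions`/`answers` frames below are hypothetical stand-ins for the source dataset and `posts_answers`:

```python
import pandas as pd

# Hypothetical stand-ins for the StackPulse question table and the
# BigQuery posts_answers table.
questions = pd.DataFrame({
    "question_id": [1, 2, 3],
    "title": ["How to sort a list?", "What is a dict?", "Unanswered Q"],
    # 99 has no matching answer row (e.g. deleted or never accepted).
    "accepted_answer_id": [10, 20, 99],
})
answers = pd.DataFrame({
    "id": [10, 20],
    "body": ["<p>Use sorted().</p>", "<p>A hash map.</p>"],
})

# Inner join on accepted_answer_id = id: questions without a matching
# accepted answer are dropped, mirroring the ~60% match rate noted below.
pairs = questions.merge(answers, left_on="accepted_answer_id", right_on="id")
print(len(pairs))  # -> 2
```

An inner `merge` is what makes "accepted answers only" fall out automatically: unmatched question IDs simply produce no row.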
---
## Processing Progress
- **Runs completed** : 4 / 6
- **Questions processed** : 400,000 / 554,196
- **Remaining** : 154,196
---
## Files in This Dataset
### Training Files (80% split)
| File | Format | Description |
|------|--------|-------------|
| data/tune_data_stack_overflow_python_qa_run1-07:19:04:2026.jsonl | JSONL | Training split from run 1 |
| data/tune_data_stack_overflow_python_qa_run2-07:19:04:2026.jsonl | JSONL | Training split from run 2 |
| data/tune_data_stack_overflow_python_qa_run3-07:19:04:2026.jsonl | JSONL | Training split from run 3 |
| data/tune_data_stack_overflow_python_qa_run4-07:19:04:2026.jsonl | JSONL | Training split from run 4 |
| data/tune_data_stack_overflow_python_qa_run5-07:19:04:2026.jsonl | JSONL | Training split from run 5 |
### Evaluation Files (20% split)
| File | Format | Description |
|------|--------|-------------|
| data/tune_eval_data_stack_overflow_python_qa_run1-07:19:04:2026.jsonl | JSONL | Eval split from run 1 |
| data/tune_eval_data_stack_overflow_python_qa_run2-07:19:04:2026.jsonl | JSONL | Eval split from run 2 |
| data/tune_eval_data_stack_overflow_python_qa_run3-07:19:04:2026.jsonl | JSONL | Eval split from run 3 |
| data/tune_eval_data_stack_overflow_python_qa_run4-07:19:04:2026.jsonl | JSONL | Eval split from run 4 |
### Full Metadata CSVs
| File | Format | Description |
|------|--------|-------------|
| data/stackpulse_qa_full_run1-07:19:04:2026.csv | CSV | Full metadata for run 1 |
| data/stackpulse_qa_full_run2-07:19:04:2026.csv | CSV | Full metadata for run 2 |
| data/stackpulse_qa_full_run3-07:19:04:2026.csv | CSV | Full metadata for run 3 |
| data/stackpulse_qa_full_run4-07:19:04:2026.csv | CSV | Full metadata for run 4 |
---
## Schema
### JSONL Files (training / eval)
Exactly two fields per row, ready for instruction fine-tuning:
| Field | Type | Description |
|-------|------|-------------|
| `input_text_instruct` | string | Instruction prefix + question title + question body |
| `output_text` | string | Accepted answer body (HTML format) |
### CSV Files (full metadata)
| Column | Description |
|--------|-------------|
| question_id | Stack Overflow question ID |
| input_text | title + body (no instruction prefix) |
| output_text | accepted answer body |
| input_text_instruct | instruction-prefixed input (same as JSONL) |
| title | question title only |
| tags | pipe-separated tags |
| q_score | question upvote score |
| view_count | total views |
| answer_count | number of answers |
| accepted_answer_id | ID of the accepted answer |
| answer_id | ID of this answer (= accepted_answer_id) |
| a_score | answer upvote score |
| is_accepted | always True (we only keep accepted answers) |
| creation_date | question creation timestamp |
---
## Quick Start
### Load with pandas
```python
import pandas as pd

# Use the exact filename: pandas does not expand "*" wildcards.
train = pd.read_json(
    "data/tune_data_stack_overflow_python_qa_run1-07:19:04:2026.jsonl",
    lines=True,
)
eval_ = pd.read_json(
    "data/tune_eval_data_stack_overflow_python_qa_run1-07:19:04:2026.jsonl",
    lines=True,
)

print(train.iloc[0]["input_text_instruct"][:300])
print(train.iloc[0]["output_text"][:300])
```
### Load with HuggingFace `datasets`
```python
from datasets import load_dataset

# Load all training and eval shards (wildcards ARE expanded by `datasets`).
ds = load_dataset(
    "json",
    data_files={
        "train": "data/tune_data_stack_overflow_python_qa_run*.jsonl",
        "eval": "data/tune_eval_data_stack_overflow_python_qa_run*.jsonl",
    },
)
print(ds)
```
### Use for fine-tuning (Alpaca-style)
```python
def format_prompt(ex):
    return {
        "text": f"{ex['input_text_instruct']}\n\n### Response:\n{ex['output_text']}"
    }

train_formatted = ds["train"].map(format_prompt)
```
---
## Instruction Template Used
```
Please answer the following Stackoverflow question on Programming. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
{title}{body}
```
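Building `input_text_instruct` from the template is a one-liner. A minimal sketch, where the `make_instruct` helper name and the exact newline placement are our assumptions, not taken from the pipeline:

```python
# TEMPLATE reproduces the instruction prefix above; exact whitespace
# between the sentences is an assumption.
TEMPLATE = (
    "Please answer the following Stackoverflow question on Programming. "
    "Answer it like you are a developer answering Stackoverflow questions.\n"
    "Stackoverflow question:\n"
    "{title}{body}"
)

def make_instruct(title: str, body: str) -> str:
    # Hypothetical helper: fills {title}{body} to produce input_text_instruct.
    return TEMPLATE.format(title=title, body=body)

example = make_instruct("How do I reverse a list? ", "<p>In Python 3.</p>")
print(example)
```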
---
## Caveats
1. **HTML in answers**: `output_text` contains raw HTML tags (`<p>`, `<pre>`, `<code>`). Strip or preserve depending on your use case.
2. **Accepted answers only**: We keep only rows where `q.accepted_answer_id = a.id`; other community answers are skipped.
3. **~60% match rate**: Of each 100K question IDs queried, ~60K have accepted answers in BigQuery. The rest are self-answered, deleted, or lack acceptance.
4. **80/20 split**: Each run uses `random_state=42` for reproducible train/eval splits.
5. **Mirrors L2_data.ipynb**: Format exactly matches DeepLearning.AI's *Finetuning Large Language Models* course notebook structure.
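For caveat 1, the HTML can be stripped with the standard library alone. A minimal sketch using `html.parser` (real pipelines often prefer BeautifulSoup; note this flattens `<pre>`/`<code>` blocks into plain text, which may or may not be what you want for code answers):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects text content while discarding tags such as <p> and <code>."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        # Called for each run of text between tags.
        self.parts.append(data)

    def text(self):
        return "".join(self.parts)

def strip_html(raw: str) -> str:
    parser = TextExtractor()
    parser.feed(raw)
    return parser.text()

print(strip_html("<p>Use <code>sorted(xs)</code> instead.</p>"))
# -> Use sorted(xs) instead.
```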
---
## Source Dataset
Question IDs and metadata sourced from:
- [Omarrran/StackPulse_778K_QnA_Code_dataset](https://huggingface.co/datasets/Omarrran/StackPulse_778K_QnA_Code_dataset)
Answers joined from:
- `bigquery-public-data.stackoverflow.posts_answers` (Google BigQuery Public Dataset)
---
## Citation
```bibtex
@dataset{malik2026stackpulseqa,
author = {Malik, Omar Haq Nawaz},
title = {StackPulse-QA: Instruction-Tuning Q&A Pairs from Stack Overflow},
year = {2026},
publisher = {HuggingFace},
url = {https://huggingface.co/datasets/Omarrran/stackpulse_qa_output},
license = {Apache-2.0}
}
```
---
## Author
**Omar Haq Nawaz Malik** (HuggingFace: [Omarrran](https://huggingface.co/Omarrran))
AI Engineer & NLP Researcher | BITS Pilani | Srinagar, Kashmir