# OpenAI GPT OSS Models - Simple Generation Script

Generate synthetic datasets using OpenAI's GPT OSS models with transparent reasoning. Works on HuggingFace Jobs with L4 GPUs!

## Tested & Working

Successfully tested on HF Jobs with the `l4x4` flavor (4x L4 GPUs = 96GB total memory).

## Quick Start

```bash
# Run on HF Jobs (tested and working)
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
  https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
  --input-dataset davanstrien/haiku_dpo \
  --output-dataset username/gpt-oss-haiku \
  --prompt-column question \
  --max-samples 2 \
  --reasoning-effort high
```

## Script Options

| Option | Description | Default |
|--------|-------------|---------|
| `--input-dataset` | HuggingFace dataset to process | Required |
| `--output-dataset` | Output dataset name | Required |
| `--prompt-column` | Column containing prompts | `prompt` |
| `--model-id` | Model to use | `openai/gpt-oss-20b` |
| `--max-samples` | Limit samples to process | None (all) |
| `--max-new-tokens` | Max tokens to generate | Auto-scales: 512/1024/2048 |
| `--reasoning-effort` | Reasoning depth: low/medium/high | `medium` |
| `--temperature` | Sampling temperature | `1.0` |
| `--top-p` | Top-p sampling | `1.0` |

**Note**: `--max-new-tokens` auto-scales with `--reasoning-effort` when not set explicitly:
- `low`: 512 tokens
- `medium`: 1024 tokens
- `high`: 2048 tokens (prevents truncation of detailed reasoning)

## What You Get

The output dataset contains:
- `prompt`: Original prompt from the input dataset
- `raw_output`: Full model response, including channel markers
- `model`: Model ID used
- `reasoning_effort`: The reasoning level used

### Understanding the Output

The raw output contains special channel markers:
- `<|channel|>analysis<|message|>` - Chain-of-thought reasoning
- `<|channel|>final<|message|>` - The actual response

Example raw output structure:
```
<|channel|>analysis<|message|>
[Reasoning about the task...]
<|channel|>final<|message|>
[Actual haiku or response]
```
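If you only want the final answer, the channel markers can be split out with ordinary string handling. A minimal sketch, using the marker format shown above (the example strings are made up):

```python
def extract_channels(raw_output: str) -> dict[str, str]:
    """Split a raw GPT OSS response into its channel sections."""
    channels: dict[str, str] = {}
    # Each section looks like <|channel|>NAME<|message|>CONTENT
    for part in raw_output.split("<|channel|>")[1:]:
        name, _, content = part.partition("<|message|>")
        channels[name] = content.strip()
    return channels

raw = (
    "<|channel|>analysis<|message|>Thinking about syllables..."
    "<|channel|>final<|message|>An old silent pond..."
)
print(extract_channels(raw)["final"])  # -> An old silent pond...
```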

## Examples

### Test with Different Reasoning Levels

**High reasoning (most detailed):**
```bash
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
  https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
  --input-dataset davanstrien/haiku_dpo \
  --output-dataset username/haiku-high \
  --prompt-column question \
  --reasoning-effort high \
  --max-samples 5
```

**Low reasoning (fastest):**
```bash
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
  https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
  --input-dataset davanstrien/haiku_dpo \
  --output-dataset username/haiku-low \
  --prompt-column question \
  --reasoning-effort low \
  --max-samples 10
```

## GPU Requirements

| Model | Memory Required | Recommended Flavor |
|-------|-----------------|--------------------|
| **openai/gpt-oss-20b** | ~40GB | `l4x4` (4x 24GB = 96GB) |
| **openai/gpt-oss-120b** | ~240GB | `8xa100` (8x 80GB) |

**Note**: The 20B model is automatically dequantized from MXFP4 to bf16 on non-Hopper GPUs, so it requires more memory than the quantized checkpoint size suggests.
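The ~40GB figure follows from back-of-the-envelope arithmetic: roughly 20 billion parameters at 2 bytes each in bf16, weights only (activations and KV cache need extra headroom on top):

```python
# Rough bf16 memory estimate for the 20B model (weights only).
params = 20e9          # ~20 billion parameters
bytes_per_param = 2    # bf16 = 16 bits = 2 bytes
weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB")  # -> ~40 GB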

## Technical Details

### Why l4x4?
- The 20B model needs ~40GB of VRAM when dequantized
- A single A10G (24GB) is insufficient
- `l4x4` provides 96GB of total memory across 4 GPUs
- Cost-effective compared to A100 instances

### Reasoning Effort
The `--reasoning-effort` option controls how much chain-of-thought reasoning the model generates:
- `low`: Quick responses with minimal reasoning
- `medium`: Balanced reasoning (default)
- `high`: Detailed step-by-step reasoning

### Sampling Parameters
OpenAI recommends `temperature=1.0` and `top_p=1.0` as defaults for GPT OSS models:
- These settings provide good diversity without compromising quality
- The model was trained to work well with these parameters
- Adjust them only if you need specific behavior (e.g., a lower temperature for more deterministic output)
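As a sketch, these defaults map onto the sampling keyword arguments commonly passed to a `generate()`-style call (the dict and helper below are illustrative, not the script's exact invocation):

```python
# Recommended sampling defaults for GPT OSS models (see note above).
sampling_kwargs = {
    "do_sample": True,   # sample rather than decode greedily
    "temperature": 1.0,  # OpenAI-recommended default
    "top_p": 1.0,        # no nucleus truncation by default
}

def deterministic_variant(kwargs: dict, temperature: float = 0.2) -> dict:
    """Return a copy tuned for more deterministic output (illustrative)."""
    return {**kwargs, "temperature": temperature}
```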

## Resources

- [Model: openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
- [HF Jobs Documentation](https://huggingface.co/docs/hub/spaces-gpu-jobs)
- [Dataset: davanstrien/haiku_dpo](https://huggingface.co/datasets/davanstrien/haiku_dpo)

---

*Last tested: 2025-01-06 on HF Jobs with the `l4x4` flavor*