---
license: cdla-permissive-2.0
task_categories:
  - tabular-regression
language:
  - en
tags:
  - llm-fine-tuning
  - performance-modeling
  - gpu-benchmarking
  - throughput-prediction
  - machine-learning
  - research
  - categorical-configuration-space
  - transfer-learning
size_categories:
  - n<1K
pretty_name: LLM Fine-Tuning Performance Benchmark Dataset
---

# LLM Fine-Tuning Performance Benchmark Dataset

## Dataset Summary

This dataset contains performance benchmarks for Large Language Model (LLM) fine-tuning across various hardware and software configurations. It includes throughput measurements (tokens per second) for 959 valid configurations, collected over 1000 GPU hours on a Kubernetes cluster. The dataset is designed for research on predictive performance modeling, specifically for evaluating methods that handle Categorical Configuration Space Expansion (CCSE), which occurs when new values are introduced for categorical variables.

**Research Purpose:** This dataset enables the evaluation of predictive model building approaches when the configuration space expands with new categorical values (e.g., new LLMs, GPU types, fine-tuning methods, or software versions).

## Dataset Description

### Overview

LLM fine-tuning is compute- and memory-intensive. This benchmark measures throughput across a configuration space with seven variables (four categorical, three numerical):

**Categorical Variables:**

- **LLM:** llama2-7b, granite-13b-v2, granite-3b-code-base-128k
- **Method:** full fine-tuning, LoRA (Low-Rank Adaptation)
- **GPU:** NVIDIA A100-80GB, NVIDIA L40S-48GB
- **Version:** v2.0.0, v2.1.0 (software stack versions)

**Numerical Variables:**

- **#GPUs:** 1, 2, 4, 8
- **Batch Size:** 1, 2, 4, 8, 16, 32, 64, 128
- **Tokens per Sample:** 512, 1024, 2048, 4096, 8192

The full configuration space contains 3840 possible combinations. After excluding invalid configurations (batch size not divisible by #GPUs, memory constraints, hardware availability), 959 valid configurations were benchmarked.
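
A minimal sketch of this count and the divisibility rule (a hypothetical illustration only; the memory and hardware-availability exclusions applied in the real benchmark require hardware knowledge and are omitted):

```python
from itertools import product

# Variable domains as listed above.
llms = ["llama2-7b", "granite-13b-v2", "granite-3b-code-base-128k"]
methods = ["full", "lora"]
gpus = ["NVIDIA-A100-SXM4-80GB", "NVIDIA-L40S-48GB"]
versions = ["v2.0.0", "v2.1.0"]
num_gpus = [1, 2, 4, 8]
batch_sizes = [1, 2, 4, 8, 16, 32, 64, 128]
tokens_per_sample = [512, 1024, 2048, 4096, 8192]

grid = list(product(llms, methods, gpus, versions,
                    num_gpus, batch_sizes, tokens_per_sample))
print(len(grid))  # 3840 possible combinations

# Keep only configurations whose batch size divides evenly across the GPUs.
feasible = [c for c in grid if c[5] % c[4] == 0]
```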

### Data Collection

Data was collected with ado, the accelerated discovery orchestrator: a platform for executing computational experiments at scale and analyzing their results. Specifically, the SFTTrainer actuator was used to collect the data on IBM Research infrastructure.

- **Compute Time:** 1011 GPU hours, computed as `train_runtime * number_gpus` summed over all runs (recomputed in the sketch below)
- **Methodology:** each configuration was executed for a single epoch over a synthetic dataset to measure throughput
- **Metric:** throughput = (total dataset tokens processed) / (epoch duration in seconds)
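
Assuming a pandas DataFrame over the fields listed under Data Fields below, the compute-time figure can be recomputed as follows (a sketch; the per-run token totals of the synthetic dataset are not stored as a column, so the throughput itself is read from `dataset_tokens_per_second`):

```python
import pandas as pd

df = pd.read_csv("dataset.csv")

# Total compute time: per-run runtime (seconds) times GPU count, in hours.
gpu_hours = (df["train_runtime"] * df["number_gpus"]).sum() / 3600
print(round(gpu_hours))  # ~1011 GPU hours
```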

## Dataset Structure

### Main Dataset

The primary dataset file is `dataset.csv`, which contains all 959 benchmarked configurations.
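
For example, it can be loaded locally with pandas or, assuming a recent version of the `datasets` library, directly from the Hub (the repository ID is taken from the citation URL below):

```python
import pandas as pd
from datasets import load_dataset

# Local copy of the CSV.
df = pd.read_csv("dataset.csv")

# Or load it from the Hugging Face Hub.
ds = load_dataset("ibm-research/LLM_Fine-Tuning_Performance",
                  data_files="dataset.csv", split="train")
```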

### Task-Specific Datasets

The `task_datasets/` directory contains CSV files for 18 specific benchmark tasks, organized by the categorical variable that causes the configuration space expansion:

**Naming Convention:** `{variable}_{generalization}_{target}.csv` (parsed in the sketch after this list)

- `variable`: gpu, method, model, or version
- `generalization`: `least` (generalized source space) or `most` (specialized source space)
- `target`: the specific value being predicted (e.g., `g3b` for granite-3b, `l7b` for llama2-7b)
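
A small helper illustrating the convention (the file name below is a hypothetical example constructed from the components above):

```python
from pathlib import Path

def parse_task_filename(path: str) -> dict:
    """Split a task file name into its three components."""
    variable, generalization, target = Path(path).stem.split("_")
    return {"variable": variable,
            "generalization": generalization,
            "target": target}

parse_task_filename("task_datasets/model_most_g3b.csv")
# -> {'variable': 'model', 'generalization': 'most', 'target': 'g3b'}
```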

### Data Fields

| Field | Type | Description |
|---|---|---|
| `method` | string | Fine-tuning method: `"full"` or `"lora"` |
| `model_name` | string | LLM model: `"llama2-7b"`, `"granite-13b-v2"`, or `"granite-3b-code-base-128k"` |
| `gpu_model` | string | GPU type: `"NVIDIA-A100-SXM4-80GB"` or `"NVIDIA-L40S-48GB"` |
| `number_gpus` | float | Number of GPUs: 1.0, 2.0, 4.0, or 8.0 |
| `tokens_per_sample` | float | Tokens per training sample: 512.0, 1024.0, 2048.0, 4096.0, or 8192.0 |
| `batch_size` | float | Training batch size: 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, or 128.0 |
| `version` | string | Foundation Model Stack version: `"v2.0.0"` or `"v2.1.0"` |
| `dataset_tokens_per_second` | float | Target variable: throughput in tokens/second |
| `train_runtime` | float | Training runtime in seconds for one epoch |

## Benchmark Tasks

The dataset supports 18 distinct prediction tasks for evaluating model building methods under Categorical Configuration Space Expansion (CCSE). Tasks are categorized by:

1. **Variable causing expansion:** LLM, GPU, Method, or Version
2. **Generalization level:**
   - **Generalized (†):** the source space includes all values of the other categorical variables
   - **Specialized (★):** the source space is restricted to specific combinations (an example split is sketched below)
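
For example, a generalized (†) LLM-expansion split can be reconstructed from the main dataset as follows (a sketch of the task definition, not the exact task files):

```python
import pandas as pd

df = pd.read_csv("dataset.csv")

# Hold out one model as the expansion target; the source space keeps all
# values of the other categorical variables.
held_out = "llama2-7b"
source = df[df["model_name"] != held_out]  # 614 rows per the table below
target = df[df["model_name"] == held_out]  # 345 rows
```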

### LLM Expansion Tasks (6 tasks)

| Source Space | Target | Source Size | Target Size |
|---|---|---|---|
| `{granite-13b, granite-3b}, *, *, *` | llama2-7b | 614 | 345 |
| `{granite-3b, llama2-7b}, *, *, *` | granite-13b | 713 | 246 |
| `{llama2-7b, granite-13b}, *, *, *` | granite-3b | 614 | 345 |
| `{granite-13b, granite-3b}, LoRA, A100, v2.1.0` | llama2-7b | 206 | 110 |
| `{granite-3b, llama2-7b}, LoRA, A100, v2.1.0` | granite-13b | 220 | 96 |
| `{llama2-7b, granite-13b}, LoRA, A100, v2.1.0` | granite-3b | 206 | 110 |

### GPU Expansion Tasks (4 tasks)

| Source Space | Target | Source Size | Target Size |
|---|---|---|---|
| `*, LoRA, A100, v2.1.0` | L40S | 316 | 203 |
| `llama2-7b, LoRA, A100, v2.1.0` | L40S | 110 | 74 |
| `granite-13b, LoRA, A100, v2.1.0` | L40S | 96 | 55 |
| `granite-3b, LoRA, A100, v2.1.0` | L40S | 110 | 74 |

### Method Expansion Tasks (4 tasks)

| Source Space | Target | Source Size | Target Size |
|---|---|---|---|
| `*, LoRA, A100, v2.1.0` | Full | 316 | 264 |
| `llama2-7b, LoRA, A100, v2.1.0` | Full | 110 | 101 |
| `granite-13b, LoRA, A100, v2.1.0` | Full | 96 | 54 |
| `granite-3b, LoRA, A100, v2.1.0` | Full | 110 | 110 |

### Version Expansion Tasks (4 tasks)

| Source Space | Target | Source Size | Target Size |
|---|---|---|---|
| `*, LoRA, A100, v2.1.0` | v2.0.0 | 316 | 174 |
| `llama2-7b, LoRA, A100, v2.1.0` | v2.0.0 | 110 | 60 |
| `granite-13b, LoRA, A100, v2.1.0` | v2.0.0 | 96 | 40 |
| `granite-3b, LoRA, A100, v2.1.0` | v2.0.0 | 110 | 74 |

**Note:** In the Source Space column, `*` indicates that the entire domain of that variable is present in the source space.
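
The specialized (★) splits fix the remaining categorical variables; for example, the llama2-7b GPU-expansion task can be reproduced as follows (a sketch, assuming the column values listed under Data Fields):

```python
import pandas as pd

df = pd.read_csv("dataset.csv")

# llama2-7b, LoRA, and v2.1.0 are fixed; the source space uses the A100
# and the target space the newly introduced L40S.
base = df[(df["model_name"] == "llama2-7b")
          & (df["method"] == "lora")
          & (df["version"] == "v2.1.0")]
source = base[base["gpu_model"] == "NVIDIA-A100-SXM4-80GB"]  # expected 110 rows
target = base[base["gpu_model"] == "NVIDIA-L40S-48GB"]       # expected 74 rows
```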

## Considerations for Using the Data

### Research Context

This dataset is used in research evaluating predictive modeling methods, particularly:

- Transfer learning approaches
- Performance prediction models (a baseline sketch follows this list)
- Handling categorical configuration space expansion
- Sample-efficient model building strategies
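
As a hedged starting point for the tabular-regression task, the following baseline one-hot encodes the categorical variables and fits a random forest to predict throughput (the model choice is illustrative, not the method used in the associated research):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("dataset.csv")
# Drop train_runtime from the features: throughput is derived from it,
# so keeping it would leak the target.
X = df.drop(columns=["dataset_tokens_per_second", "train_runtime"])
y = df["dataset_tokens_per_second"]

pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"),
      ["method", "model_name", "gpu_model", "version"])],
    remainder="passthrough",  # numerical columns pass through unchanged
)
model = Pipeline([("pre", pre), ("rf", RandomForestRegressor(random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # R^2 on a random held-out split
```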

### Data Characteristics

1. **Hardware-Specific:** results are specific to NVIDIA A100-80GB and L40S-48GB GPUs
2. **Software-Specific:** measurements were taken with specific software stack versions (v2.0.0 and v2.1.0; see the `version` field)
3. **Invalid Configurations Excluded:**
   - configurations where `batch_size` is not divisible by `number_gpus`
   - configurations exceeding GPU memory limits
4. **Synthetic Dataset:** throughput was measured using synthetic training data
5. **Single Epoch:** measurements represent single-pass throughput, not full training convergence

## Citation Information

If you use this dataset in your research, please cite:

```bibtex
@misc{lotito2026finetuning,
  title={LLM Fine-Tuning Performance Benchmark Dataset},
  author={Lotito, Daniele and Venugopal, Srikumar and
          Vassiliadis, Vassilis and Pinto, Christian and
          Pomponio, Alessandro and Johnston, Michael},
  howpublished={Hugging Face Datasets},
  url={https://huggingface.co/datasets/ibm-research/LLM_Fine-Tuning_Performance/},
  year={2026}
}
```