---
license: mit
pretty_name: ReliabilityLoop v1
task_categories:
  - text-generation
  - other
language:
  - en
tags:
  - llm
  - reliability
  - benchmarking
  - json
  - sql
  - code-generation
  - evaluation
configs:
  - config_name: viewer
    data_files:
      - split: train
        path: reliability_v1_60_viewer.jsonl
---

# ReliabilityLoop v1

ReliabilityLoop v1 is a small, executable benchmark for local LLM reliability across three production-style task types:

- `json`: schema-constrained structured extraction
- `sql`: text-to-SQL validated by SQLite execution
- `codestub`: Python function generation validated by unit tests

This dataset is designed for verifier-based evaluation: outputs must work, not just look plausible.
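To make "must work" concrete, here is a minimal sketch of what a JSON-style check could look like. This is illustrative only: the function name, field names, and schema below are invented, and the benchmark's real verifiers live in the framework repo.

```python
import json

def verify_json(raw_output: str, required_fields: dict) -> bool:
    """Hypothetical JSON check: parse, then verify each expected field/value.

    `required_fields` maps field names to expected values; the benchmark's
    actual schema checks may be stricter (types, nesting, extra fields).
    """
    try:
        parsed = json.loads(raw_output)
    except json.JSONDecodeError:
        return False  # output must actually parse, not just look like JSON
    if not isinstance(parsed, dict):
        return False
    return all(parsed.get(k) == v for k, v in required_fields.items())

# Example: an extraction task expecting two fields.
print(verify_json('{"name": "Ada", "year": 1815}',
                  {"name": "Ada", "year": 1815}))  # True
print(verify_json("not json", {"name": "Ada"}))    # False
```

A plausible-looking but unparseable output fails immediately, which is the point of verifier-based scoring.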

## Files

- `reliability_v1_60.jsonl`: canonical split with 60 tasks:
  - 20 JSON tasks
  - 20 SQL tasks
  - 20 code tasks
- `RELIABILITY_V1_SPEC.md`: benchmark protocol and metric definitions.

## Primary Metric

- `policy_ok_rate = passed_tasks / total_tasks`

A task is counted as passed only if its verifier succeeds:

- JSON: parse + schema + expected field checks
- SQL: executes and matches expected columns/rows
- Code: function compiles and passes tests
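For the SQL case, a minimal sketch of "executes and matches expected rows" can be written with Python's stdlib `sqlite3`. The schema, query, and expected rows below are invented for illustration and are not tasks from this dataset.

```python
import sqlite3

def verify_sql(query: str, setup_sql: str, expected_rows: list) -> bool:
    """Hypothetical SQL check: run the generated query against a fresh
    in-memory SQLite database and compare the returned rows."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(setup_sql)        # create and populate tables
        rows = conn.execute(query).fetchall()
    except sqlite3.Error:
        return False                         # query must actually execute
    finally:
        conn.close()
    return rows == expected_rows

setup = """
CREATE TABLE users (id INTEGER, name TEXT);
INSERT INTO users VALUES (1, 'Ada'), (2, 'Grace');
"""
print(verify_sql("SELECT name FROM users ORDER BY id", setup,
                 [("Ada",), ("Grace",)]))    # True
print(verify_sql("SELECT nope FROM users", setup, []))  # False: bad column
```

Execution against a real engine is what separates this from string-matching the SQL text: a syntactically pretty query that references a nonexistent column still fails.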

## Recommended Evaluation Command

Using the ReliabilityLoop framework:

```shell
reliabilityloop reliability \
  --backend ollama \
  --model qwen2.5-coder:0.5b \
  --prompts-file eval/reliability_v1_60.jsonl \
  --limit 60 \
  --max-tokens 96 \
  --policy-json contract_first \
  --policy-sql baseline_first \
  --policy-code baseline_first
```

Outputs:

- `summary.json`
- `leaderboard.md`
- `samples.jsonl`
- `wins.jsonl`

## Reproducibility Notes

- Keep config fixed when comparing models:
    - prompt file
    - temperature
    - token budgets
    - policy mode
- Report the raw `samples.jsonl` alongside summary metrics.
- If using memory/retrieval, ensure no leakage from evaluation prompts unless
  explicitly evaluating memory-assisted mode.
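With the raw per-task records published, anyone can recompute the primary metric along these lines. This is a sketch: the record schema is assumed here (a boolean `passed` field and a `task_type` field), and the actual `samples.jsonl` layout may differ.

```python
import json
from collections import Counter

def policy_ok_rate(jsonl_lines):
    """Recompute passed_tasks / total_tasks from per-task JSONL records.

    Assumes each line is a JSON object with a boolean `passed` field and a
    `task_type` field; the real samples.jsonl schema may differ.
    """
    records = [json.loads(line) for line in jsonl_lines if line.strip()]
    passed = sum(1 for r in records if r.get("passed"))
    per_type = Counter(r.get("task_type") for r in records)
    return passed / len(records), dict(per_type)

# Three fabricated records, one per task type:
lines = [
    '{"task_type": "json", "passed": true}',
    '{"task_type": "sql", "passed": false}',
    '{"task_type": "codestub", "passed": true}',
]
rate, counts = policy_ok_rate(lines)
print(rate)    # 2 of 3 tasks passed
print(counts)  # per-task-type counts
```

Recomputing the headline number from the published artifacts is the cheapest leakage/fairness check a reader can run.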

## Anti-Leakage / Fairness Policy

For base benchmark comparisons:

- Use `--memory-file disabled` (or a disjoint memory source).
- Evaluate all models on the same task file and settings.
- Publish the exact command, config, and output artifacts.

## Intended Use

- Compare reliability of local LLMs under executable checks.
- Evaluate runtime strategies (policy routing, adaptive compute, memory
  reuse).
- Build transparent, reproducible reliability leaderboards.

## Limitations

- Small benchmark (60 tasks), alpha-quality split.
- Not a replacement for broad general benchmarks.
- Current coverage focuses on structured and executable reliability, not
  open-ended reasoning.

## Links

- Framework repo: https://github.com/ranausmanai/reliabilityloop
- Dataset: https://huggingface.co/datasets/ranausmans/reliabilityloop-v1