---
license: mit
pretty_name: ReliabilityLoop v1
task_categories:
- text-generation
- other
language:
- en
tags:
- llm
- reliability
- benchmarking
- json
- sql
- code-generation
- evaluation
configs:
- config_name: viewer
  data_files:
  - split: train
    path: reliability_v1_60_viewer.jsonl
---


# ReliabilityLoop v1

ReliabilityLoop v1 is a small, executable benchmark for local LLM reliability
across three production-style task types:

- `json`: schema-constrained structured extraction
- `sql`: text-to-SQL validated by SQLite execution
- `codestub`: Python function generation validated by unit tests

This dataset is designed for **verifier-based evaluation**: outputs must
*work*, not just look plausible.

## Files

- `reliability_v1_60.jsonl`
  Canonical split with 60 tasks:
  - 20 JSON tasks
  - 20 SQL tasks
  - 20 code tasks

- `RELIABILITY_V1_SPEC.md`
  Benchmark protocol and metric definitions.
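Each line of the JSONL split is one task record. A minimal loading sketch in Python (the record fields shown here are hypothetical stand-ins; the authoritative schema is defined in `RELIABILITY_V1_SPEC.md`):

```python
import json
import os
import tempfile

# Hypothetical two-record sample standing in for reliability_v1_60.jsonl;
# the real field names come from RELIABILITY_V1_SPEC.md.
sample = [
    {"task_id": "json_001", "task_type": "json"},
    {"task_id": "sql_001", "task_type": "sql"},
]
path = os.path.join(tempfile.mkdtemp(), "sample.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for rec in sample:
        f.write(json.dumps(rec) + "\n")

# Load one task per line and tally tasks by type.
with open(path, encoding="utf-8") as f:
    tasks = [json.loads(line) for line in f]

by_type: dict[str, int] = {}
for t in tasks:
    by_type[t["task_type"]] = by_type.get(t["task_type"], 0) + 1
```

For the canonical split, the same loop would yield 60 tasks split evenly across the three types.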

## Primary Metric

- `policy_ok_rate` = `passed_tasks / total_tasks`

A task counts as passed only if its verifier succeeds:

- JSON: output parses, conforms to the schema, and matches expected field checks
- SQL: query executes and matches expected columns/rows
- Code: function compiles and passes its unit tests
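To make the pass criteria concrete, here is a hedged sketch of JSON and SQL verifiers. This is not the framework's actual implementation: full schema validation and column checks are simplified to a required-field check and a row comparison.

```python
import json
import sqlite3


def verify_json(output: str, required_fields: set[str]) -> bool:
    """Parse check plus required-field check (full schema validation omitted)."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and required_fields <= data.keys()


def verify_sql(query: str, setup_sql: str, expected_rows: list[tuple]) -> bool:
    """Execute against an in-memory SQLite database and compare result rows."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(setup_sql)
        rows = conn.execute(query).fetchall()
    except sqlite3.Error:
        return False
    finally:
        conn.close()
    return rows == expected_rows
```

The code verifier follows the same shape: compile the generated function, run its unit tests, and count the task as passed only if every test succeeds.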

## Recommended Evaluation Command

Using the ReliabilityLoop framework:

```bash
reliabilityloop reliability \
  --backend ollama \
  --model qwen2.5-coder:0.5b \
  --prompts-file eval/reliability_v1_60.jsonl \
  --limit 60 \
  --max-tokens 96 \
  --policy-json contract_first \
  --policy-sql baseline_first \
  --policy-code baseline_first
```

Outputs:

- `summary.json`
- `leaderboard.md`
- `samples.jsonl`
- `wins.jsonl`
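The headline metric can be recomputed from the per-task records along these lines (a sketch only: the `passed` field name is an assumption, so check `summary.json` and the spec for the exact schema the framework emits):

```python
def policy_ok_rate(samples: list[dict]) -> float:
    """passed_tasks / total_tasks over per-task verifier results."""
    if not samples:
        return 0.0
    # "passed" is a hypothetical field name for the verifier verdict.
    return sum(1 for s in samples if s.get("passed")) / len(samples)
```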

## Reproducibility Notes

- Keep the configuration fixed when comparing models:
  - prompt file
  - temperature
  - token budgets
  - policy mode
- Report the raw `samples.jsonl` alongside summary metrics.
- If using memory/retrieval, ensure no leakage from evaluation prompts unless
  explicitly evaluating a memory-assisted mode.

## Anti-Leakage / Fairness Policy

For base benchmark comparisons:

- Run with `--memory-file` disabled (or use a memory source disjoint from the
  evaluation tasks).
- Evaluate all models on the same task file and settings.
- Publish the command, config, and artifacts.

## Intended Use

- Compare the reliability of local LLMs under executable checks.
- Evaluate runtime strategies (policy routing, adaptive compute, memory
  reuse).
- Build transparent, reproducible reliability leaderboards.

## Limitations

- Small benchmark (60 tasks), alpha-quality split.
- Not a replacement for broad general benchmarks.
- Current coverage focuses on structured and executable reliability, not
  open-ended reasoning.

## Links

- Framework repo: https://github.com/ranausmanai/reliabilityloop
- Dataset: https://huggingface.co/datasets/ranausmans/reliabilityloop-v1