Snider Virgil committed on
Commit 97a5fdf · 1 Parent(s): 967b785

fix: wrap toxigen samples_start past dataset end instead of crashing


The auto-offset progressed past the 50-question dataset, causing
'no items in range' → RuntimeError. Now wraps modulo dataset size
so the farm loops continuously through the question set.

Co-Authored-By: Virgil <virgil@lethean.io>

Files changed (1)
  1. eval.py +8 -3
eval.py CHANGED
@@ -300,10 +300,15 @@ def _run_generative_rounds(model_name, task, n_questions, rounds, samples_start=
     system_prompt = GENERATIVE_SYSTEM_PROMPTS[task]
     items = _load_bench_items(task)
 
-    window = items[samples_start:samples_start + n_questions]
-    if not window:
-        print(f"  WARNING: no items in range [{samples_start}, {samples_start + n_questions})")
+    total_items = len(items)
+    if total_items == 0:
+        print(f"  WARNING: benchmark has no items")
         return []
+    wrapped_start = samples_start % total_items
+    window = items[wrapped_start:wrapped_start + n_questions]
+    if not window:
+        window = items[0:n_questions]
+    print(f"  samples [{wrapped_start}, {wrapped_start + len(window)}) of {total_items} (canon offset {samples_start})")
 
     all_rounds = []
     for r in range(1, rounds + 1):
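The wrap-around windowing added by this commit can be exercised in isolation. The sketch below mirrors the patched logic in a hypothetical standalone helper (`wrapped_window` is not a function in eval.py; it is an assumption for illustration), showing how an offset past the end of the dataset wraps back to the start instead of producing an empty window:

```python
def wrapped_window(items, samples_start, n_questions):
    """Return up to n_questions items, wrapping samples_start modulo the
    dataset size — same logic as the patched hunk above."""
    total_items = len(items)
    if total_items == 0:
        return []
    wrapped_start = samples_start % total_items
    window = items[wrapped_start:wrapped_start + n_questions]
    if not window:
        # Fallback kept from the patch: restart from the beginning.
        window = items[0:n_questions]
    return window

items = list(range(50))               # stand-in for the 50-question dataset
print(wrapped_window(items, 48, 5))   # near the end: truncated to [48, 49]
print(wrapped_window(items, 100, 5))  # past the end: 100 % 50 = 0 → [0, 1, 2, 3, 4]
```

Note that, as in the patch, a window starting near the end is truncated rather than spliced across the boundary; only the starting offset wraps.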