giantfish-fly committed on
Commit 6cc2eb4 · verified · 1 Parent(s): aa19daf

Update README.md

Files changed (1)
  1. README.md +133 -17
README.md CHANGED
@@ -223,6 +223,15 @@ Currently it includes two files:
223
  - **sequential_additional.parquet** → Sequential mode (non-randomized, strict per-key ordered update blocks). Trivial for humans yet still challenging for many LLMs; smaller (all <600B) models are especially affected, with proactive-interference effects clearly exposed (even in short contexts, ~5–8k tokens).
224
 
225
 
226
  ## Quick Start - Evaluate Your Model
227
 
228
  ```python
@@ -344,7 +353,7 @@ def n_tokens(messages):
344
  """Count tokens in messages."""
345
  return sum([len(enc.encode(m["content"])) for m in messages])
346
 
347
- # Evaluate your model
348
  results = []
349
  for index, row in dataset.iterrows():
350
  messages = json.loads(row["prompt"])
@@ -393,11 +402,133 @@ if 'run_id' in results_df.columns:
393
  print("\n=== Experiment accuracy averaged across runs (run_id) ===")
394
  for _, r in exp_avg.iterrows():
395
  print(f"{r['experiment']}: {r['accuracy_percent']:.1f}% (averaged over runs)")
396
 
397
 
 
398

399

400

401
  ## References
402
  -
403
  - PI-LLM demo site: https://sites.google.com/view/cog4llm
@@ -412,19 +543,4 @@ if 'run_id' in results_df.columns:
412
  primaryClass={cs.CL},
413
  url={https://arxiv.org/abs/2506.08184},
414
  }
415
- ```
416
-
417
- We are an interdisciplinary group interested in probing the boundaries between human and machine intelligence.
418
-
419
- Chupei Wang*
420
- Bachelor, University of Virginia, Physics Department.
421
-
422
- With a foundation in physics and philosophy—including a year at the University of Chicago Divinity School—Chupei explores where logic and mind meet their limits, probing how the edges of science and the humanities intersect. Chupei is driven by a curiosity about where cognitive architectures—biological and artificial—break down, and what these failures teach us about intelligence itself. Currently seeking Lab and Research.
423
-
424
- 📫 cw4bb@virginia.edu
425
-
426
- Jiaqiu Vince Sun*
427
- PhD Candidate, NYU Center for Neuroscience
428
-
429
- A former professional architect turned neuroscientist, Jiaqiu draws on his background in spatial design, cognitive neuroscience, and philosophy of mind to investigate how memory emerges and diverges in brains and artificial systems. His primary focus lies in the higher-level functions of the brain, such as self-monitoring and control.
430
- 📫 vince.sun@nyu.edu
 
223
  - **sequential_additional.parquet** → Sequential mode (non-randomized, strict per-key ordered update blocks). Trivial for humans yet still challenging for many LLMs; smaller (all <600B) models are especially affected, with proactive-interference effects clearly exposed (even in short contexts, ~5–8k tokens).
224
 
225
 
226
+ # PI-LLM Dataset File List
227
+
228
+ This repository hosts the **PI-LLM** dataset.
229
+ Currently it includes two files (a loading sketch follows the list below):
230
+
231
+ - **core.parquet** → Main dataset (randomized updates). Recommended as the primary/SOTA comparison setting; all tested models fail to reliably retrieve the last value.
232
+ - **sequential_additional.parquet** → Sequential mode (non-randomized, strict per-key ordered update blocks). Trivial for humans yet still challenging for many LLMs; smaller (all <600B) models are especially affected, with proactive-interference effects clearly exposed (even in short contexts, ~5–8k tokens).
233
+
234
+
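+ A minimal loading sketch (illustrative, not official usage): the `core` config name matches the usage example later in this README, while the `sequential_additional` config name is assumed from the parquet file name and may need adjusting.
+
+ ```python
+ from datasets import load_dataset
+
+ # Core setting (randomized updates), the primary comparison setting
+ core = load_dataset("giantfish-fly/pi-llm", "core")["test"]
+
+ # Sequential setting; config name assumed from the file name above
+ sequential = load_dataset("giantfish-fly/pi-llm", "sequential_additional")["test"]
+
+ print(len(core), core.column_names)
+ ```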
235
  ## Quick Start - Evaluate Your Model
236
 
237
  ```python
 
353
  """Count tokens in messages."""
354
  return sum([len(enc.encode(m["content"])) for m in messages])
355
 
356
+ # Evaluate your model (recommended: use the AUC/weighted scoring below)
357
  results = []
358
  for index, row in dataset.iterrows():
359
  messages = json.loads(row["prompt"])
 
402
  print("\n=== Experiment accuracy averaged across runs (run_id) ===")
403
  for _, r in exp_avg.iterrows():
404
  print(f"{r['experiment']}: {r['accuracy_percent']:.1f}% (averaged over runs)")
405
+ ```
406
+
407
+ ## 🏆 Advanced Evaluation with AUC Scoring (Highly Recommended)
408
+
409
+
410
+
411
+ ### Why AUC Scoring?
412
+ - **Average accuracy** treats all tasks equally → poor model differentiation
413
+ - **AUC (log base 1.5)** weighs harder tasks more → better high-end model ranking (see the weight sketch after this list)
414
+ - **Essential for research** comparing SOTA models on difficult ranges
415
+
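+ As a quick, illustrative sanity check of the weighting (numbers are hypothetical, not dataset values), the log-base-1.5 weight used in the function below grows with the number of updates, so a 300-update sample counts roughly 2.5x as much as a 10-update one:
+
+ ```python
+ import math
+
+ for n_updates in (10, 46, 300):
+     w = math.log(max(n_updates, 2), 1.5)  # same weight formula as compute_pi_auc_score below
+     print(f"n_updates={n_updates:>3} -> weight = {w:.2f}")
+ # n_updates= 10 -> weight = 5.68
+ # n_updates= 46 -> weight = 9.44
+ # n_updates=300 -> weight = 14.07
+ ```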
416
+ ### Complete Evaluation Function
417
+
418
+ ```python
419
+ import math
420
+
421
+ def compute_pi_auc_score(results, log_base=1.5):
422
+     """
423
+     PI-LLM AUC score (PRIMARY: 'auc_log1.5'), using log_base(n_updates) weights.
424
+     - For two-mode experiments (keys/value length), also returns easy/hard AUCs.
425
+     - For others (updates/sequential), returns a single overall AUC.
426
+     """
427
+     if not results:
428
+         return {'avg_accuracy': 0.0, 'auc_log1.5': 0.0, 'total_samples': 0}
429
+
430
+     def wmean(samples):
431
+         # weight = log_base(max(n_updates, 2)) to reflect difficulty
432
+         ws = [math.log(max(s.get('n_updates', 2), 2), log_base) for s in samples]
433
+         denom = sum(ws)
434
+         return (sum(s['accuracy'] * w for s, w in zip(samples, ws)) / denom) if denom else 0.0
435
+
436
+     exp = results[0].get('experiment', '')
437
+     avg = sum(s['accuracy'] for s in results) / len(results)
438
+     overall = wmean(results)
439
+
440
+     # Two-mode thresholds
441
+     if 'exp_keys' in exp:
442
+         easy_thr, hard_thr = 125, 350
443
+     elif 'exp_valuelength' in exp:
444
+         easy_thr, hard_thr = 4, 20
445
+     else:
446
+         # Single-mode path
447
+         return {'avg_accuracy': avg, 'auc_log1.5': overall, 'total_samples': len(results)}
448
+
449
+     easy = [s for s in results if s.get('n_updates', 0) <= easy_thr]
450
+     hard = [s for s in results if s.get('n_updates', 0) >= hard_thr]
451
+
452
+     return {
453
+         'avg_accuracy': avg,
454
+         'auc_log1.5': overall,  # PRIMARY metric
455
+         'auc_log1.5_easy': wmean(easy) if easy else 0.0,
456
+         'auc_log1.5_hard': wmean(hard) if hard else 0.0,
457
+         'total_samples': len(results),
458
+     }
459
+ ```
460
+
461
+ ### Usage Example
462
+
463
+ ```python
464
+ from datasets import load_dataset
465
+
466
+ # Load PI-LLM dataset
467
+ dataset = load_dataset("giantfish-fly/pi-llm", "core")['test']
468
+
469
+ # Run your model and collect results
470
+ results = []
471
+ for sample in dataset:
472
+     pred = your_model(sample['prompt'])  # Your model inference
473
+     accuracy = grade_pi_response(pred, sample['answer_formatted'])
474
+     results.append({
475
+         'accuracy': accuracy,
476
+         'n_updates': sample['n_updates'],
477
+         'experiment': sample['experiment']
478
+     })
479
+
480
+ # Compute AUC scores
481
+ scores = compute_pi_auc_score(results)
482
+
483
+ # Display results (format varies by experiment)
484
+ print(f"🏆 AUC Score: {scores['auc_log1.5']:.3f}")  # PRIMARY metric
485
+ if 'auc_log1.5_easy' in scores:
486
+     print(f"📊 Easy Mode: {scores['auc_log1.5_easy']:.3f}")
487
+     print(f"📊 Hard Mode: {scores['auc_log1.5_hard']:.3f}")
488
+ ```
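+
+ Note: `your_model` and `grade_pi_response` are placeholders here. If you are not reusing the grader from the Quick Start section above, the sketch below is one possible stand-in; it assumes `answer_formatted` is a newline-separated list of expected "key: value" lines, which may not match the official grading logic.
+
+ ```python
+ def grade_pi_response(pred, answer_formatted):
+     """Toy grader (assumption): fraction of expected lines found verbatim, case-insensitively, in the prediction."""
+     expected = [line.strip() for line in str(answer_formatted).splitlines() if line.strip()]
+     if not expected:
+         return 0.0
+     hits = sum(1 for line in expected if line.lower() in pred.lower())
+     return hits / len(expected)
+ ```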
489
+
490
+ ### Output Formats
491
+
492
+ **Single-Mode Experiments** (`exp_updates`, `exp_sequential`):
493
+ ```python
494
+ {'avg_accuracy': 0.600, 'auc_log1.5': 0.412, 'total_samples': 100}
495
+ ```
496
+
497
+ **Two-Mode Experiments** (`exp_keys`, `exp_valuelength`):
498
+ ```python
499
+ {
500
+     'avg_accuracy': 0.600, 'auc_log1.5': 0.576,            # Overall metrics
501
+     'auc_log1.5_easy': 0.850, 'auc_log1.5_hard': 0.350,    # Mode breakdown
502
+     'total_samples': 150
503
+ }
504
+ ```
505
+
506
+ ### 🎯 For Model Ranking: Use `auc_log1.5` as your primary metric!
507
+
508
+ ### ✅ Finally, Total Score (Macro PI-AUC1.5)
509
+
510
+ **Definition:** average of each test's `auc_log1.5` (simple, clear leaderboard number).
511
+ ```python
512
+ def compute_total_pi_auc(all_tests, log_base=1.5):
513
+     """
514
+     Total PI-AUC1.5 across tests = average of per-test auc_log1.5.
515
+     all_tests: dict {test_name -> list[results]} where each `results` list
516
+     is what you'd pass to compute_pi_auc_score(...).
517
+     """
518
+     if not all_tests:
519
+         return {"per_test_auc_log1.5": {}, "total_auc_log1.5": 0.0}
520
+
521
+     per_test = {
522
+         name: compute_pi_auc_score(rs, log_base)["auc_log1.5"]
523
+         for name, rs in all_tests.items() if rs
524
+     }
525
+     total = sum(per_test.values()) / len(per_test) if per_test else 0.0
526
+     return {"per_test_auc_log1.5": per_test, "total_auc_log1.5": total}
527
+
528
+ ```
529
+
530
+
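+ A short usage sketch for the total score (the per-test result lists are placeholders; collect each one the same way as `results` in the usage example above):
+
+ ```python
+ all_tests = {
+     "core": core_results,                          # list of {'accuracy', 'n_updates', 'experiment'} dicts
+     "sequential_additional": sequential_results,   # placeholder name
+ }
+ totals = compute_total_pi_auc(all_tests)
+ print(f"Total PI-AUC1.5: {totals['total_auc_log1.5']:.3f}")
+ print(totals["per_test_auc_log1.5"])
+ ```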
532
  ## References
533
  -
534
  - PI-LLM demo site: https://sites.google.com/view/cog4llm
 
543
  primaryClass={cs.CL},
544
  url={https://arxiv.org/abs/2506.08184},
545
  }
546
+ ```