Dataset Viewer (auto-converted to Parquet)
Schema (17 columns):
  dataset_name      string, lengths 31–80
  script_name       string, lengths 16–56
  model             string, 9 classes
  hyperparameters   string, lengths 2–547
  input_datasets    string, lengths 2–90
  description       string, lengths 75–1.25k
  tags              string, lengths 53–123
  custom_metadata   string, lengths 116–1.79k
  updated           string, length 32
  experiment_id     null
  run_id            null
  artifact_type     null
  visualizer_type   null
  artifact_group    null
  parent_artifact   null
  size_bytes        int64 (min -1, max -1)
  created           string, length 32
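The hyperparameters, input_datasets, tags, and custom_metadata columns are JSON-encoded strings rather than native structures, so a consumer has to decode them after loading. A minimal stdlib sketch (the row literal is abridged from the first record below; no particular loading library is assumed):

```python
import json

# One row as it appears in the viewer: JSON payloads are stored as strings.
row = {
    "dataset_name": "onboarding-countdown-qwen3-1.7b",
    "model": "Qwen/Qwen3-1.7B",
    "hyperparameters": '{"temperature": 0.7, "max_tokens": 4096, "top_p": 0.9}',
    "tags": '["onboarding", "countdown", "tutorial", "qwen3-1.7b"]',
    "custom_metadata": '{"experiment_name": "onboarding", "canary": true}',
}

def decode_row(row):
    """Return a copy of the row with its JSON-string columns parsed."""
    out = dict(row)
    for col in ("hyperparameters", "input_datasets", "tags", "custom_metadata"):
        if row.get(col):
            out[col] = json.loads(row[col])
    return out

decoded = decode_row(row)
print(decoded["hyperparameters"]["temperature"])  # 0.7
print(decoded["custom_metadata"]["canary"])       # True
```

The same decode step works row-by-row regardless of whether the Parquet file is read with pandas, pyarrow, or the `datasets` library.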
Rows. experiment_id, run_id, artifact_type, visualizer_type, artifact_group, and parent_artifact are null in every row, and size_bytes is -1; those constant columns are omitted from the records below.

dataset_name: onboarding-countdown-qwen3-1.7b
script_name: run_countdown.py
model: Qwen/Qwen3-1.7B
hyperparameters: {"temperature": 0.7, "max_tokens": 4096, "top_p": 0.9}
input_datasets: []
description: Tutorial experiment: Qwen3-1.7B on 8 Countdown problems (3-4 operands, targets 10-99)
tags: ["onboarding", "countdown", "tutorial", "qwen3-1.7b"]
custom_metadata: {"experiment_name": "onboarding", "job_id": "vista:662880", "cluster": "vista", "artifact_status": "final", "canary": true}
updated: 2026-04-12T23:39:40.773794+00:00
created: 2026-04-12T23:39:40.773794+00:00

dataset_name: llmrecourse-calibrated-retrieval-test-2samples-v1
script_name: multihop_dinco_minicheck_hotpotqa.py
model: Qwen/Qwen3-8B + Bespoke-MiniCheck-7B
hyperparameters: {"nvc_threshold": 0.7, "grounding_threshold": 0.7, "combined_threshold": 0.65, "prior_weight": 0.6, "policy_mode": "threshold", "qwen_dtype": "bfloat16", "minicheck_gpu_memory_utilization": 0.45, "seed": 17, "max_examples": 2, "start_idx": 0}
input_datasets: ["hotpotqa/hotpot_qa (distractor, validation)"]
description: 2-sample smoke test of DINCO+MiniCheck multi-hop retrieval controller on HotpotQA distractor validation
tags: ["llmrecourse-calibrated-retrieval", "canary", "hotpotqa", "dinco", "minicheck"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval", "job_id": "vista:663034", "cluster": "vista", "artifact_status": "final", "canary": true}
updated: 2026-04-13T01:47:02.597387+00:00
created: 2026-04-13T01:47:02.597387+00:00

dataset_name: llmrecourse-calibrated-retrieval-agent-gated-canary-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_max_turns": 6, "agent_max_new_tokens": 512, "model_name": "Qwen/Qwen3-32B", "minicheck_model": "Bespoke-MiniCheck-7B", "minicheck_max_model_len": 4096, "seed": 42, "shuffle": true, "limit": 5, "indexed_pool_limit": 2000}
input_datasets: ["hotpotqa/hotpot_qa (distractor, validation)"]
description: Agent-gated calibrated retrieval canary: 5 HotpotQA examples with Qwen3-32B + MiniCheck-7B. Agent receives DINCO confidence and MiniCheck grounding telemetry to decide actions (commit/retrieve/refine/decompose). EM=0.40, F1=0.61. Zero parse failures, agent uses telemetry in all decisions.
tags: ["llmrecourse-calibrated-retrieval", "agent-gated", "canary", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval", "job_id": "vista:663159", "cluster": "vista", "artifact_status": "final", "canary": true}
updated: 2026-04-13T04:09:09.005566+00:00
created: 2026-04-13T04:09:09.005566+00:00

dataset_name: llmrecourse-calibrated-retrieval-agent-gated-50-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_max_turns": 6, "agent_max_new_tokens": 512, "seed": 42, "indexed_pool_limit": 2000, "minicheck_max_model_len": 4096}
input_datasets: []
description: Agent-gated calibrated retrieval on HotpotQA — FINAL (50/50 examples). Agent receives DINCO+MiniCheck telemetry, decides commit/retrieve/refine/decompose. EM=0.560, F1=0.737.
tags: ["llmrecourse-calibrated-retrieval", "agent-gated", "hotpotqa", "final"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval", "job_id": "vista:663208", "cluster": "vista", "artifact_status": "final", "canary": false}
updated: 2026-04-13T07:44:23.454634+00:00
created: 2026-04-13T05:59:17.651670+00:00

dataset_name: llmrecourse-calibrated-retrieval-origbeam-50-v1
script_name: run_calibrated_retrieval_hotpotqa_qwen32b_origbeam.py
model: Qwen/Qwen3-32B
hyperparameters: {"dinco_threshold": 0.8, "minicheck_g_mean_threshold": 0.7, "minicheck_g_min_threshold": 0.5, "seed": 42, "indexed_pool_limit": 2000, "minicheck_max_model_len": 4096}
input_datasets: []
description: Origbeam threshold-based calibrated retrieval on HotpotQA — FINAL (50/50 examples). Hardcoded thresholds: DINCO>=0.80 skip retrieval, g_mean>=0.70 accept grounded. EM=0.480, F1=0.672.
tags: ["llmrecourse-calibrated-retrieval", "origbeam", "hotpotqa", "final"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval", "job_id": "vista:663209", "cluster": "vista", "artifact_status": "final", "canary": false}
updated: 2026-04-13T07:44:55.945852+00:00
created: 2026-04-13T05:59:26.690167+00:00

dataset_name: llmrecourse-calibrated-retrieval-fullcontext-50-v1
script_name: hotpotqa_qwen32b_fullcontext_baseline.py
model: Qwen/Qwen3-32B
hyperparameters: {"max_new_tokens": 256, "max_model_len": 8192, "enable_thinking": false, "seed": 42, "indexed_pool_limit": 2000}
input_datasets: []
description: Full-context baseline on HotpotQA — FINAL (50/50 examples). All 10 distractor paragraphs given to model. vLLM with enable_thinking=False. EM=0.480, F1=0.704. Ceiling baseline for retrieval comparison.
tags: ["llmrecourse-calibrated-retrieval", "full-context", "hotpotqa", "baseline", "final"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval", "job_id": "vista:663443", "cluster": "vista", "artifact_status": "final", "canary": false}
updated: 2026-04-13T07:45:20.023607+00:00
created: 2026-04-13T07:45:20.023607+00:00

dataset_name: llmrecourse-calibrated-retrieval-ab50-stateless-v1
script_name: calibandretrieve/run_agent_gated_retrieval_hotpotqa.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_prompt_mode": "stateless", "agent_max_turns": 6, "agent_max_new_tokens": 512, "seed": 42, "limit": 50, "indexed_pool_limit": 2000, "gate_threshold": 0.8, "retrieval_top_k": 8, "generator_dtype": "bfloat16"}
input_datasets: []
description: Agent-gated calibrated retrieval on HotpotQA — 50-example STATELESS baseline (seed 42). Part of stateless vs multi-turn A/B comparison.
tags: ["llmrecourse-calibrated-retrieval", "ab-test", "stateless", "hotpotqa", "qwen3-32b"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval", "job_id": "vista:664398", "cluster": "vista", "artifact_status": "final", "canary": false, "results_summary": {"mean_em": 0.58, "mean_f1": 0.7565, "avg_nodes": 1.86, "retrieved_nodes": 0, "agent_turns_total": 188}}
updated: 2026-04-14T16:34:07.965007+00:00
created: 2026-04-14T16:34:07.965007+00:00

dataset_name: llmrecourse-calibrated-retrieval-ab50-multiturn-v1
script_name: calibandretrieve/run_agent_gated_retrieval_hotpotqa.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_prompt_mode": "multi_turn", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "limit": 50, "indexed_pool_limit": 2000, "gate_threshold": 0.8, "retrieval_top_k": 8, "generator_dtype": "bfloat16"}
input_datasets: []
description: Agent-gated calibrated retrieval on HotpotQA — 50-example MULTI-TURN arm (seed 42). Part of stateless vs multi-turn A/B comparison. EM=0.560, F1=0.737.
tags: ["llmrecourse-calibrated-retrieval", "ab-test", "multi-turn", "hotpotqa", "qwen3-32b"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval", "job_id": "vista:664399", "cluster": "vista", "artifact_status": "final", "canary": false, "results_summary": {"mean_em": 0.56, "mean_f1": 0.7365, "avg_nodes": 1.86, "agent_turns_total": 187, "ab_comparison": "Identical to stateless on 49/50 questions. Single flip...
updated: 2026-04-14T17:11:05.306953+00:00
created: 2026-04-14T17:11:05.306953+00:00

dataset_name: llmrecourse-calibrated-retrieval-ab50-notelemetry-stateless-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "no_telemetry", "agent_max_turns": 6, "seed": 42}
input_datasets: []
description: Stateless agent-gated retrieval WITHOUT telemetry (ablation). No DINCO/MiniCheck shown to agent. EM=0.560, F1=0.741. 139 retrieves, 2.7 avg turns/sq.
tags: ["llmrecourse-calibrated-retrieval", "ablation", "no-telemetry"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval", "job_id": "vista:666636", "cluster": "vista", "artifact_status": "final", "canary": false}
updated: 2026-04-15T20:40:51.304453+00:00
created: 2026-04-15T20:40:51.304453+00:00

dataset_name: llmrecourse-calibrated-retrieval-ab50-notelemetry-multiturn-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_prompt_mode": "multi_turn", "agent_telemetry_mode": "no_telemetry", "agent_max_turns": 6, "seed": 42}
input_datasets: []
description: Multi-turn agent-gated retrieval WITHOUT telemetry (ablation). No DINCO/MiniCheck shown to agent. EM=0.540, F1=0.718. 94 retrieves, 2.0 avg turns/sq.
tags: ["llmrecourse-calibrated-retrieval", "ablation", "no-telemetry"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval", "job_id": "vista:666637", "cluster": "vista", "artifact_status": "final", "canary": false}
updated: 2026-04-15T20:41:56.129079+00:00
created: 2026-04-15T20:41:56.129079+00:00

dataset_name: llmrecourse-calibrated-retrieval-ab50-react-notelemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_prompt_mode": "react", "agent_telemetry_mode": "no_telemetry", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "shuffle": true, "limit": 50, "indexed_pool_limit": 2000}
input_datasets: []
description: ReAct agent WITHOUT telemetry on HotpotQA — 50 examples. Thought/Action/Observation loop with Search[query] and Finish[answer]. No DINCO/MiniCheck scores shown to agent. EM=0.580, F1=0.752.
tags: ["llmrecourse-calibrated-retrieval", "react", "no-telemetry", "hotpotqa", "ab50"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval", "job_id": "vista:666693", "cluster": "vista", "artifact_status": "final", "canary": false}
updated: 2026-04-16T01:21:19.520096+00:00
created: 2026-04-16T01:21:19.520096+00:00

dataset_name: llmrecourse-calibrated-retrieval-ab50-react-telemetry-v1
script_name: run_react_telemetry.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_prompt_mode": "react", "agent_telemetry_mode": "full_telemetry", "max_agent_turns": 6, "seed": 42, "n_examples": 50}
input_datasets: ["hotpotqa/distractor/validation"]
description: ReAct agent WITH telemetry on HotpotQA — Thought/Action/Observation loop with Search[query] and Finish[answer]. Agent sees passage text AND DINCO/MiniCheck scores in observations. EM=0.560, F1=0.738. 101 retrieves, 93 commits, avg 3.9 turns/example.
tags: ["llmrecourse-calibrated-retrieval", "react", "telemetry", "hotpotqa", "ab50"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval", "job_id": "vista:666694", "cluster": "vista", "artifact_status": "final", "canary": false}
updated: 2026-04-16T08:24:34.630277+00:00
created: 2026-04-16T08:24:34.630277+00:00

dataset_name: llmrecourse-calibrated-retrieval-qampari-canary-v1
script_name: calibandretrieve/run_qampari_retrieval.py
model: Qwen/Qwen3-32B
hyperparameters: {"n_examples": 5, "seed": 42, "max_model_len": 16384, "gpu_memory_utilization": 0.72, "minicheck_max_model_len": 4096, "num_beams": 5, "num_sc_samples": 5, "beam_temperature": 0.3, "sc_temperature": 0.7, "threshold_dinco": 0.8, "answer_max_new_tokens": 2048, "agent_max_new_tokens": 512, "agent_max_turns": 6}
input_datasets: []
description: QAMPARI canary: 5 examples x 8 conditions. Per-answer DINCO + MiniCheck. 38/40 clean (2 react_notelemetry context overflow).
tags: ["llmrecourse-calibrated-retrieval-qampari", "canary", "qampari", "dinco", "minicheck"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval-qampari", "job_id": "vista:673376", "cluster": "vista", "artifact_status": "final", "canary": true}
updated: 2026-04-21T15:26:15.777353+00:00
created: 2026-04-21T07:16:49.233947+00:00

dataset_name: llmrecourse-calibrated-retrieval-witqa-canary-v1
script_name: run_witqa_retrieval.py
model: Qwen/Qwen3-8B
hyperparameters: {"generator": "Qwen/Qwen3-8B", "grounder": "bespokelabs/Bespoke-MiniCheck-7B", "gpu_memory_utilization_generator": 0.72, "gpu_memory_utilization_grounder": 0.18, "max_model_len": 16384, "minicheck_max_model_len": 4096, "num_beams": 5, "num_sc_samples": 5, "beam_temperature": 0.3, "sc_temperature": 0.7, "answer_max_new_...
input_datasets: ["megagonlabs/witqa"]
description: WiTQA canary (40 examples × 7 conditions = 280 rows) on TACC Vista GH200 gh-dev partition. Tests confidence-calibrated retrieval (DINCO) on single-answer factoid QA stratified by subject popularity. 280/280 rows clean, zero errors, all finish_reason=stop. Prompt-leak validation PASS — the condition_has_telemetry('notel...
tags: ["llmrecourse-calibrated-retrieval-witqa", "calibrated-retrieval", "witqa", "dinco", "minicheck", "canary"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval-witqa", "job_id": "vista:673982", "cluster": "vista", "partition": "gh-dev", "artifact_status": "final", "canary": true}
updated: 2026-04-21T09:01:49.094858+00:00
created: 2026-04-21T09:01:49.094858+00:00

dataset_name: llmrecourse-calibrated-retrieval-oversearch-canary-v1
script_name: run_oversearch_retrieval.py
model: Qwen/Qwen3-8B
hyperparameters: {"temperature_greedy": 0.0, "temperature_dinco_beams": 0.3, "temperature_dinco_sc": 0.7, "dinco_n_beams": 5, "dinco_n_sc": 5, "max_tokens_answer": 256, "minicheck_max_model_len": 4096, "gpu_memory_utilization_main": 0.72, "gpu_memory_utilization_minicheck": 0.18, "bm25_top_k": 5, "threshold_dinco": 0.5}
input_datasets: ["apple/ml-over-searching (AU.json/FP.json/UC.json, 1188 total; canary sampled 10/10/10)"]
description: OverSearchQA canary — 30 balanced examples (10 AU + 10 FP + 10 UC) × 8 conditions. End-to-end validation of the DINCO+MiniCheck-gated retrieval pipeline for the OverSearchQA benchmark (Apple ml-over-searching, EACL 2026). Zero errors, 240/240 finish_reason=stop. Prompt-leak check PASS: agent_telemetry + agent_telemetry...
tags: ["llmrecourse", "calibrated-retrieval", "oversearchqa", "canary", "dinco", "minicheck", "abstention"]
custom_metadata: {"experiment_name": "llmrecourse-calibrated-retrieval-oversearch", "job_id": "vista:673984", "cluster": "vista", "partition": "gh-dev", "node": "c642-092", "artifact_status": "final", "canary": true}
updated: 2026-04-21T09:24:05.433816+00:00
created: 2026-04-21T09:24:05.433816+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-llama3_1_8b-stateless-telemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: meta-llama/Llama-3.1-8B-Instruct
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "full", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 50}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_telemetry. Phase: full. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 394 retrievals across 50 questions, avg 9.94 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-full", "llama3_1_8b", "stateless", "telemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:684312", "cluster": "vista", "artifact_status": "final", "canary": false, "retrievals_total": 394, "avg_agent_turns": 9.94}
updated: 2026-04-28T13:35:26.048885+00:00
created: 2026-04-23T06:00:27.199467+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-llama3_1_8b-stateless-notelemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: meta-llama/Llama-3.1-8B-Instruct
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "no_telemetry", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 300}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_notelemetry. Phase: ablation300. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 1598 retrievals across 300 questions, avg 10.19 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-ablation300", "llama3_1_8b", "stateless", "notelemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:695958", "cluster": "vista", "artifact_status": "canary", "canary": false, "retrievals_total": 1598, "avg_agent_turns": 10.193}
updated: 2026-05-06T07:57:41.049418+00:00
created: 2026-04-23T06:10:35.189552+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-llama3_1_8b-react-telemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: meta-llama/Llama-3.1-8B-Instruct
hyperparameters: {"agent_prompt_mode": "react", "agent_telemetry_mode": "full", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 5}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: react_telemetry. Phase: canary. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 26 retrievals across 5 questions, avg 8.60 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-canary", "llama3_1_8b", "react", "telemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:676775", "cluster": "vista", "artifact_status": "canary", "canary": true, "retrievals_total": 26, "avg_agent_turns": 8.6}
updated: 2026-04-23T06:20:40.518249+00:00
created: 2026-04-23T06:20:40.518249+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-llama3_1_8b-react-notelemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: meta-llama/Llama-3.1-8B-Instruct
hyperparameters: {"agent_prompt_mode": "react", "agent_telemetry_mode": "no_telemetry", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 5}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: react_notelemetry. Phase: canary. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 51 retrievals across 5 questions, avg 13.60 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-canary", "llama3_1_8b", "react", "notelemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:676775", "cluster": "vista", "artifact_status": "canary", "canary": true, "retrievals_total": 51, "avg_agent_turns": 13.6}
updated: 2026-04-23T06:33:09.131888+00:00
created: 2026-04-23T06:33:09.131888+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-mistral_small_24b-stateless-telemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: mistralai/Mistral-Small-24B-Instruct-2501
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "full", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 50}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_telemetry. Phase: full. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 146 retrievals across 50 questions, avg 5.12 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-full", "mistral_small_24b", "stateless", "telemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:684124", "cluster": "vista", "artifact_status": "final", "canary": false, "retrievals_total": 146, "avg_agent_turns": 5.12}
updated: 2026-04-28T10:48:34.499127+00:00
created: 2026-04-23T06:51:26.418977+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-mistral_small_24b-stateless-notelemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: mistralai/Mistral-Small-24B-Instruct-2501
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "no_telemetry", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 300}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_notelemetry. Phase: ablation300. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 1710 retrievals across 300 questions, avg 8.09 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-ablation300", "mistral_small_24b", "stateless", "notelemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:695959", "cluster": "vista", "artifact_status": "canary", "canary": false, "retrievals_total": 1710, "avg_agent_turns": 8.093}
updated: 2026-05-06T06:37:05.165292+00:00
created: 2026-04-23T06:58:35.110688+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-mistral_small_24b-react-telemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: mistralai/Mistral-Small-24B-Instruct-2501
hyperparameters: {"agent_prompt_mode": "react", "agent_telemetry_mode": "full", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 50}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: react_telemetry. Phase: full. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 168 retrievals across 50 questions, avg 5.74 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-full", "mistral_small_24b", "react", "telemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:676872", "cluster": "vista", "artifact_status": "final", "canary": false, "retrievals_total": 168, "avg_agent_turns": 5.74}
updated: 2026-04-23T20:19:07.427353+00:00
created: 2026-04-23T07:05:41.902308+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-mistral_small_24b-react-notelemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: mistralai/Mistral-Small-24B-Instruct-2501
hyperparameters: {"agent_prompt_mode": "react", "agent_telemetry_mode": "no_telemetry", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 50}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: react_notelemetry. Phase: full. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 197 retrievals across 50 questions, avg 6.28 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-full", "mistral_small_24b", "react", "notelemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:676872", "cluster": "vista", "artifact_status": "final", "canary": false, "retrievals_total": 197, "avg_agent_turns": 6.28}
updated: 2026-04-23T21:06:38.567461+00:00
created: 2026-04-23T07:11:27.976328+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-gemma3_27b-stateless-telemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: google/gemma-3-27b-it
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "full", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 5}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_telemetry. Phase: canary. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 9 retrievals across 5 questions, avg 4.20 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-canary", "gemma3_27b", "stateless", "telemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:676857", "cluster": "vista", "artifact_status": "canary", "canary": true, "retrievals_total": 9, "avg_agent_turns": 4.2}
updated: 2026-04-23T10:38:35.000097+00:00
created: 2026-04-23T08:35:00.244078+00:00

dataset_name: trialmatching-certainty-screener-v8-1-run20
script_name: extract_trialmatching_atomic_facts.py
model: gpt-5-mini
hyperparameters: {"reasoning_effort": "high", "max_output_tokens": 16000, "temperature": "default (not set for gpt-5 family)"}
input_datasets: ["table-source-20patients.csv", "brainmets_pulsar_criteria.csv"]
description: V8.1 medical document screener run on 20 Brain-Mets trial candidates. Output: categorical verdict (met/not_met), insufficient_evidence flag, secondary 0-100 certainty_score, free-form reasoning, and grounded citations. TS CSV / AI reasoning NOT used as LLM input.
tags: ["trialmatching-certainty", "screener", "v8.1", "categorical-verdict"]
custom_metadata: {"experiment_name": "trialmatching-certainty", "job_id": "rohpc:147649", "cluster": "rohpc", "artifact_status": "final", "canary": false}
updated: 2026-04-23T10:00:24.733128+00:00
created: 2026-04-23T10:00:24.733128+00:00

dataset_name: dinco-beam-vs-sampling-hotpotqa-sampling_dinco-canary-v1
script_name: dinco_hotpotqa_vllm.py
model: Qwen/Qwen3-32B
hyperparameters: {}
input_datasets: []
description: DINCO calibration on HotpotQA-distractor; condition=sampling_dinco; mode=canary
tags: ["dinco", "calibration", "hotpotqa", "qwen32b", "sampling_dinco", "canary"]
custom_metadata: {"experiment_name": "dinco-beam-vs-sampling-hotpotqa", "job_id": "680964", "cluster": "vista", "artifact_status": "final", "canary": true, "condition": "sampling_dinco"}
updated: 2026-04-26T08:41:08.284806+00:00
created: 2026-04-26T07:38:48.318736+00:00

dataset_name: dinco-beam-vs-sampling-hotpotqa-beam_dinco-canary-v1
script_name: dinco_hotpotqa_vllm.py
model: Qwen/Qwen3-32B
hyperparameters: {}
input_datasets: []
description: DINCO calibration on HotpotQA-distractor; condition=beam_dinco; mode=canary
tags: ["dinco", "calibration", "hotpotqa", "qwen32b", "beam_dinco", "canary"]
custom_metadata: {"experiment_name": "dinco-beam-vs-sampling-hotpotqa", "job_id": "680964", "cluster": "vista", "artifact_status": "final", "canary": true, "condition": "beam_dinco"}
updated: 2026-04-26T08:41:16.649919+00:00
created: 2026-04-26T07:38:56.984415+00:00

dataset_name: dinco-beam-vs-sampling-hotpotqa-raw_verbal_baseline-canary-v1
script_name: dinco_hotpotqa_vllm.py
model: Qwen/Qwen3-32B
hyperparameters: {}
input_datasets: []
description: DINCO calibration on HotpotQA-distractor; condition=raw_verbal_baseline; mode=canary
tags: ["dinco", "calibration", "hotpotqa", "qwen32b", "raw_verbal_baseline", "canary"]
custom_metadata: {"experiment_name": "dinco-beam-vs-sampling-hotpotqa", "job_id": "680964", "cluster": "vista", "artifact_status": "final", "canary": true, "condition": "raw_verbal_baseline"}
updated: 2026-04-26T08:41:27.046552+00:00
created: 2026-04-26T07:39:05.932322+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-gpt_oss_20b-stateless-telemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: openai/gpt-oss-20b
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "full", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 50}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_telemetry. Phase: full. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 152 retrievals across 50 questions, avg 4.68 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-full", "gpt_oss_20b", "stateless", "telemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:686147", "cluster": "vista", "artifact_status": "final", "canary": false, "retrievals_total": 152, "avg_agent_turns": 4.68}
updated: 2026-05-01T18:08:01.992520+00:00
created: 2026-04-28T23:47:53.610058+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-gpt_oss_20b-stateless-notelemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: openai/gpt-oss-20b
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "no_telemetry", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 50}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_notelemetry. Phase: full. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 346 retrievals across 50 questions, avg 8.50 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-full", "gpt_oss_20b", "stateless", "notelemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:686148", "cluster": "vista", "artifact_status": "final", "canary": false, "retrievals_total": 346, "avg_agent_turns": 8.5}
updated: 2026-05-01T19:36:37.832752+00:00
created: 2026-04-29T00:25:20.151002+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-qwen3_32b-stateless-notelemetry-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "no_telemetry", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 300}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_notelemetry. Phase: ablation300. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 979 retrievals across 300 questions, avg 5.59 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-ablation300", "qwen3_32b", "stateless", "notelemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:695960", "cluster": "vista", "artifact_status": "canary", "canary": false, "retrievals_total": 979, "avg_agent_turns": 5.587}
updated: 2026-05-06T12:52:54.554369+00:00
created: 2026-04-29T14:19:18.429136+00:00

dataset_name: dinco-beam-vs-sampling-hotpotqa-tier1-qwen3-32b-stateless-notelemetry-50ex
script_name: run_agent_gated_retrieval_hotpotqa.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "no_telemetry", "agent_max_turns": 6, "agent_max_new_tokens": 512, "qwen_tensor_parallel_size": 2, "qwen_max_model_len": 4096, "minicheck_max_model_len": 8192, "seed": 42}
input_datasets: ["hotpotqa/hotpot_qa"]
description: Stateless agent + no_telemetry on HotpotQA-distractor (50 ex, seed=42). Closes the missing Qwen-32B notelemetry ablation cell. Ran on TACC LS6 gpu-a100 with TP=2 + MiniCheck on dedicated GPU 2.
tags: ["dinco-beam-vs-sampling-hotpotqa", "tier1", "qwen3-32b", "stateless", "no_telemetry", "hotpotqa"]
custom_metadata: {"experiment_name": "dinco-beam-vs-sampling-hotpotqa", "job_id": "ls6:3132842", "cluster": "ls6", "artifact_status": "final", "canary": false}
updated: 2026-04-29T22:38:43.352678+00:00
created: 2026-04-29T22:38:43.352678+00:00

dataset_name: agent-telemetry-prompt-framing-smoketest-qwen32b-10ex-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_prompt_mode": "stateless", "variants": ["full", "info", "role", "narrative"], "agent_max_turns": 6, "agent_max_new_tokens": 512, "qwen_tensor_parallel_size": 2, "qwen_max_model_len": 4096, "minicheck_model_name": "deberta-v3-large", "minicheck_max_model_len": 8192, "seed": 42, "n_examples_per_variant": 10, "pro...
input_datasets: ["hotpotqa/hotpot_qa"]
description: Prompt-framing ablation smoke-test (N=10/variant × 4 variants on HotpotQA-distractor, seed=42). Tests whether reframing telemetry from gating-language to descriptive reasoning input shifts agent behavior. Combined dataset with `variant` column ∈ {full, info, role, narrative}.
tags: ["agent-telemetry-prompt-framing", "prompt-ablation", "qwen3-32b", "hotpotqa", "smoketest"]
custom_metadata: {"experiment_name": "agent-telemetry-prompt-framing", "cluster": "miata", "artifact_status": "final", "canary": false, "summary_stats_per_variant": {"full": {"em": "7/10", "f1": 0.882, "turns": 38, "actions": {"commit": 18, "retrieve": 18, "refine": 2}}, "info": {"em": "7/10", "f1": 0.882, "turns": 37, "actions": {"com...
updated: 2026-04-30T02:52:15.550181+00:00
created: 2026-04-30T02:52:15.550181+00:00

dataset_name: agent-telemetry-prompt-framing-powered-qwen32b-50ex-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: Qwen/Qwen3-32B
hyperparameters: {"agent_prompt_mode": "stateless", "variants": ["full", "role"], "agent_max_turns": 6, "agent_max_new_tokens": 512, "qwen_tensor_parallel_size": 2, "qwen_max_model_len": 4096, "minicheck_model_name": "Bespoke-MiniCheck-7B", "minicheck_max_model_len": 8192, "seed": 42, "n_examples_per_variant": 50}
input_datasets: ["hotpotqa/hotpot_qa"]
description: Powered ablation: role vs full prompt at N=50 on HotpotQA-distractor (seed=42), Qwen3-32B + Bespoke-MiniCheck-7B on miata. role removes threshold language and adds explicit DINCO/MiniCheck role separation. Result: role reduces retrievals by 5.8% with no accuracy change (EM Δ-1, F1 Δ-0.004, both within noise). Tests the...
tags: ["agent-telemetry-prompt-framing", "prompt-ablation", "qwen3-32b", "hotpotqa", "powered", "role-vs-full"]
custom_metadata: {"experiment_name": "agent-telemetry-prompt-framing", "cluster": "miata", "artifact_status": "final", "canary": false, "head_to_head_n49": {"full": {"em": "29/49", "f1": 0.763, "turns": 200, "actions": {"commit": 89, "retrieve": 104, "refine": 7}}, "role": {"em": "28/49", "f1": 0.759, "turns": 196, "actions": {"commit"...
updated: 2026-04-30T09:41:53.257229+00:00
created: 2026-04-30T09:41:53.257229+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-mistral_small_24b-stateless-role-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: mistralai/Mistral-Small-24B-Instruct-2501
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "role", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 300}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_role. Phase: ablation300. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 891 retrievals across 300 questions, avg 5.48 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-ablation300", "mistral_small_24b", "stateless", "role", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:695066", "cluster": "vista", "artifact_status": "canary", "canary": false, "retrievals_total": 891, "avg_agent_turns": 5.477}
updated: 2026-05-05T17:58:43.461050+00:00
created: 2026-04-30T10:44:07.738913+00:00

dataset_name: llmrecourse-telemetry-bias-crossmodel-llama3_1_8b-stateless-role-v1
script_name: run_agent_gated_retrieval_hotpotqa.py
model: meta-llama/Llama-3.1-8B-Instruct
hyperparameters: {"agent_prompt_mode": "stateless", "agent_telemetry_mode": "role", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 300}
input_datasets: ["hotpotqa/hotpot_qa:distractor:validation"]
description: Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_role. Phase: ablation300. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 2594 retrievals across 300 questions, avg 11.95 agent turns/example.
tags: ["llmrecourse-telemetry-bias-crossmodel", "tier1-ablation300", "llama3_1_8b", "stateless", "role", "hotpotqa"]
custom_metadata: {"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:695065", "cluster": "vista", "artifact_status": "canary", "canary": false, "retrievals_total": 2594, "avg_agent_turns": 11.947}
2026-05-05T20:56:32.850049+00:00
null
null
null
null
null
null
-1
2026-04-30T12:48:19.448195+00:00
agent-telemetry-prompt-framing-mint-smoketest-qwen32b-10ex-v1
run_agent_mint.py
Qwen/Qwen3-32B
{"agent_max_new_tokens": 512, "n_sc_samples": 5, "qwen_tensor_parallel_size": 2, "qwen_max_model_len": 4096, "seed": 0, "n_examples_per_variant": 10, "protocol": "Q-First (question on turn 0, shards revealed turn 1..N in fixed clinical order)", "force_final_commit": true, "prompt_token_counts_whitespace_approx": {"role...
["MINT-shared_all_medical_merged_medqa"]
Smoke-test of role-prompt vs no_telemetry on MINT (Medical Incremental N-Turn Benchmark, arXiv 2604.04325) Q-First protocol. N=10 dermatology cases (first 10 of dataset), 2 modes. Each case: shards revealed sequentially in clinical order; agent emits commit/hold/abstain per turn. Telemetry: per-letter P(true) over A/B/...
["agent-telemetry-prompt-framing", "mint", "qwen3-32b", "medical-diagnosis", "smoketest"]
{"experiment_name": "agent-telemetry-prompt-framing", "cluster": "miata", "artifact_status": "final", "canary": false, "head_to_head_n10": {"role": {"initial_acc": 0.9, "final_acc": 0.9, "guess_rate": 0.0, "abstention": 0.0, "hold_rate": 0.831, "avg_initial_commit_turn": 6.9, "avg_commits_per_case": 1.4}, "no_telemetry...
2026-04-30T15:52:09.264067+00:00
null
null
null
null
null
null
-1
2026-04-30T15:52:09.264067+00:00
agent-telemetry-prompt-framing-mint-stratified-qwen32b-65ex-v1
run_agent_mint.py
Qwen/Qwen3-32B
{"agent_max_new_tokens": 512, "n_sc_samples": 5, "qwen_tensor_parallel_size": 2, "qwen_max_model_len": 4096, "seed": 0, "n_examples_per_variant": 65, "n_specialties": 13, "cases_per_specialty": 5, "force_final_commit": true, "specialties": ["Dermatology", "MedBullets", "Cardiovascular System", "Endocrinology", "Gastroi...
["MINT-shared_all_medical_merged_medqa"]
MINT (Medical Incremental N-Turn Benchmark) Q-First protocol stratified across 13 broad specialties × 5 cases = 65 cases per mode. Modes: role (DINCO/MiniCheck role-separated, no thresholds) vs no_telemetry. Per-turn telemetry: per-letter P(true) over multi-choice options + self-consistency over 5 stochastic samples. N...
["agent-telemetry-prompt-framing", "mint", "qwen3-32b", "medical-diagnosis", "stratified"]
{"experiment_name": "agent-telemetry-prompt-framing", "cluster": "miata", "artifact_status": "final", "canary": false, "head_to_head_n65": {"role": {"initial_acc": 0.738, "final_acc": 0.738, "guess_rate": 0.0, "abstention": 0.0, "hold_rate": 0.836, "avg_initial_commit_turn": 9.69, "avg_commits_per_case": 1.91}, "no_tel...
2026-05-01T02:40:57.498013+00:00
null
null
null
null
null
null
-1
2026-05-01T02:40:57.498013+00:00
agent-telemetry-prompt-framing-mint-stratified-qwen32b-65ex-rolebeam-v1
run_agent_mint.py (mode=role_beam, constrained sampling)
Qwen/Qwen3-32B
{"agent_max_new_tokens": 512, "n_sc_samples": 5, "qwen_tensor_parallel_size": 2, "qwen_max_model_len": 4096, "seed": 0, "n_examples_per_variant": 65, "telemetry_strategy": "constrained_beam_dinco", "dinco_components": ["beam_search_4_candidates_constrained_to_options", "P_true_per_candidate", "DeBERTa_NLI_pairwise", "s...
["MINT-shared_all_medical_merged_medqa"]
MINT Q-First with HotpotQA-style beam DINCO telemetry, constrained to one of the four option phrases. Stratified across 13 broad specialties × 5 cases = 65 cases. Sister artifact to ashwinnv/agent-telemetry-prompt-framing-mint-stratified-qwen32b-65ex-v1 (which has role and no_telemetry variants).
["agent-telemetry-prompt-framing", "mint", "qwen3-32b", "medical-diagnosis", "stratified", "beam-dinco"]
{"experiment_name": "agent-telemetry-prompt-framing", "cluster": "miata", "artifact_status": "final", "canary": false, "head_to_head_n65": {"role_beam": {"initial_acc": 0.785, "final_acc": 0.8, "abst": 0.015, "avg_init_t": 5.56, "commits_per_case": 5.95, "F2T": 3, "T2F": 2}, "role_v1": {"initial_acc": 0.738, "final_acc...
2026-05-01T08:31:00.545865+00:00
null
null
null
null
null
null
-1
2026-05-01T08:31:00.545865+00:00
agent-telemetry-prompt-framing-mint-stratified-qwen32b-65ex-ablation
run_agent_mint.py
Qwen/Qwen3-32B
{"temperature": "default", "max_new_tokens": 512, "max_model_len": 4096, "tensor_parallel_size": 2, "max_turns": "natural-shard-count"}
["MINT/mint_stratified_5per_broad_specialty (65 cases)"]
MINT prompt-component ablation (3 arms x stratified 65 cases). Each arm drops one Decision Principles paragraph from the no_telemetry system prompt: no_anchor (force-final-turn), no_lab_warning (LAB_RESULTS lure warning), no_sufficiency (hold-vs-commit). All deltas vs baseline are below the pre-registered MDE (1.5 turn...
["mint", "ablation", "no_telemetry", "qwen3-32b", "stratified65"]
{"experiment_name": "agent-telemetry-prompt-framing", "cluster": "miata", "artifact_status": "final", "canary": false}
2026-05-03T12:09:26.991011+00:00
null
null
null
null
null
null
-1
2026-05-03T12:09:26.991011+00:00
agent-telemetry-prompt-framing-mint-full1035-qwen32b
run_agent_mint.py
Qwen/Qwen3-32B
{"max_new_tokens": 512, "max_model_len": 4096, "tensor_parallel_size": 2, "dtype": "bf16", "enable_thinking": false}
["MINT/mint_shared_all_medical_merged_medqa.json (1035 cases) + n9 / n10-lab subsets"]
MINT full-1035 replication + strict-N follow-ups (FINAL). Qwen3-32B no_telemetry. 10 arms total: 6 main (full_single_turn / natural / q_last / lab_early/middle/late) + 4 strict-N (n9_natural for paper Table 2 replica, n10_lab_early/middle/late for paper §4.3 strict 10-turn lab-lure). All on ls6 gpu-a100. Headlines: (a)...
["mint", "full1035", "no_telemetry", "qwen3-32b", "phase4", "final", "paper-comparison", "strict-N"]
{"experiment_name": "agent-telemetry-prompt-framing", "cluster": "ls6", "artifact_status": "final", "canary": false}
2026-05-04T10:31:46.703909+00:00
null
null
null
null
null
null
-1
2026-05-03T15:13:52.504085+00:00
llmrecourse-telemetry-bias-crossmodel-qwen3_32b-stateless-role-v1
run_agent_gated_retrieval_hotpotqa.py
Qwen/Qwen3-32B
{"agent_prompt_mode": "stateless", "agent_telemetry_mode": "role", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 300}
["hotpotqa/hotpot_qa:distractor:validation"]
Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_role. Phase: ablation300. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 648 retrievals across 300 questions, avg 4.33 agent turns/example.
["llmrecourse-telemetry-bias-crossmodel", "tier1-ablation300", "qwen3_32b", "stateless", "role", "hotpotqa"]
{"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:695067", "cluster": "vista", "artifact_status": "canary", "canary": false, "retrievals_total": 648, "avg_agent_turns": 4.333}
2026-05-05T23:28:58.849036+00:00
null
null
null
null
null
null
-1
2026-05-05T23:28:58.849036+00:00
llmrecourse-telemetry-bias-crossmodel-gpt_oss_20b-stateless-role-v1
run_agent_gated_retrieval_hotpotqa.py
openai/gpt-oss-20b
{"agent_prompt_mode": "stateless", "agent_telemetry_mode": "role", "agent_max_turns": 6, "agent_max_new_tokens": 512, "agent_max_context_tokens": 16384, "seed": 42, "n_examples": 150}
["hotpotqa/hotpot_qa:distractor:validation"]
Tier 1 cross-model telemetry bias — HotpotQA distractor seed=42. Condition: stateless_role. Phase: ablation300. Derived counts (retrievals/commits/turns) from policy_trace. Per-turn telemetry snapshots preserved. 496 retrievals across 150 questions, avg 4.74 agent turns/example.
["llmrecourse-telemetry-bias-crossmodel", "tier1-ablation300", "gpt_oss_20b", "stateless", "role", "hotpotqa"]
{"experiment_name": "llmrecourse-telemetry-bias-crossmodel", "job_id": "vista:695939", "cluster": "vista", "artifact_status": "canary", "canary": false, "retrievals_total": 496, "avg_agent_turns": 4.74}
2026-05-07T01:50:10.487078+00:00
null
null
null
null
null
null
-1
2026-05-07T01:50:10.487078+00:00

RACA-PROJECT-MANIFEST

Central registry of all datasets in the ashwinnv organization.

  • Total Datasets Tracked: 43
  • Last Updated: 2026-05-07T01:50:12.034155+00:00

Usage

from datasets import load_dataset

manifest = load_dataset("ashwinnv/RACA-PROJECT-MANIFEST", split="train")
print(f"Tracking {len(manifest)} datasets")
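Several columns in the manifest (`hyperparameters`, `custom_metadata`, `tags`) are stored as JSON strings rather than nested fields, so they need to be parsed per row before you can query them. A minimal sketch, assuming that string encoding and using the `custom_metadata` value from one of the rows above (the Qwen3-32B stateless-role run):

```python
import json

# custom_metadata is a JSON string per row (assumption based on the rows above);
# parse it to recover fields like cluster, artifact_status, and derived counts.
row_metadata = (
    '{"experiment_name": "llmrecourse-telemetry-bias-crossmodel", '
    '"job_id": "vista:695067", "cluster": "vista", '
    '"artifact_status": "canary", "canary": false, '
    '"retrievals_total": 648, "avg_agent_turns": 4.333}'
)
meta = json.loads(row_metadata)
print(meta["cluster"])           # vista
print(meta["retrievals_total"])  # 648
```

The same pattern applies to the `hyperparameters` column, e.g. to recover `seed` or `n_examples` when grouping runs by configuration.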