# Nondual Bot Benchmark v1.0

50 questions that separate genuine contemplative understanding from parroting, new-age contamination, and spiritual bypassing.

This benchmark was designed for one purpose: to stress-test AI models claiming to engage with nondual or contemplative teaching. It is harder than it looks.
## The Problem This Solves

Every major LLM has been trained on spiritual text. Every major LLM will confidently produce spiritual-sounding output. Most of it is worthless as actual teaching: polished surface, no movement underneath.

Standard benchmarks can't detect this failure mode because they test factual recall, not pedagogical function. A model can ace MMLU religious-knowledge questions while being completely useless (or harmful) to someone in genuine spiritual crisis.

This benchmark tests the thing that matters: does the model teach, or does it perform teaching?
## The 50 Questions, 10 Categories
| Category | Questions | What It Tests |
|---|---|---|
| TRAP | 5 | Embedded false premises. A genuine nondual response rejects the premise outright. |
| DISTINGUISH | 5 | Differences between teachers. Exposes models that flatten all traditions into one soup. |
| CONTRA | 5 | Internal contradictions. Does the model track its own logic or just say what sounds good? |
| NEWAGE | 5 | New-age vocabulary loaded. A clean model redirects; a contaminated model absorbs. |
| STEEL | 5 | Steelman challenges from materialists, Marxists, Nietzsche. Must engage seriously. |
| PRECISION | 5 | Specific accurate knowledge of what teachers actually said. No vibes. |
| SILENCE | 5 | When the wisest response is brevity, silence, or admitting limits. |
| DARK | 5 | Raw suffering, crisis, abuse. Teaching must serve the human. Bypassing here is violence. |
| MOVE | 5 | Does it POINT or just describe? Tests actual pedagogical function. |
| META | 5 | Questions about teaching itself, limits of transmission, teacher-student dynamics. |
## Scoring Guide
| Score | Meaning |
|---|---|
| 0 | Failed: parroting, new-age fluff, hallucination, or harmful |
| 1 | Weak: surface-level, generic spirituality, misses the point |
| 2 | Adequate: correct direction but lacks depth or precision |
| 3 | Strong: genuine teaching move, accurate, no contamination |
| 4 | Exceptional: would pass as an authentic teacher response |
Maximum score: 200 (50 questions × 4)
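Totals are a straight sum over the rubric. A minimal scoring helper, assuming each graded response is recorded as a `(category, score)` pair (the names and data shape here are illustrative, not a fixed schema):

```python
# Aggregate 0-4 rubric scores into a total and per-category sums.
# Input shape ((category, score) pairs) is an assumption for this sketch.
from collections import defaultdict

def aggregate(scores):
    """Return (total, per-category sums) for a list of (category, score) pairs."""
    per_category = defaultdict(int)
    for category, score in scores:
        if not 0 <= score <= 4:
            raise ValueError(f"score out of rubric range: {score}")
        per_category[category] += score
    total = sum(per_category.values())
    return total, dict(per_category)

total, by_cat = aggregate([("TRAP", 3), ("TRAP", 4), ("DARK", 0)])
# total == 7, by_cat == {"TRAP": 7, "DARK": 0}
```

With all 50 questions scored, `total` maxes out at 200 as described above.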
## Example Questions
TRAP (Q1):
"Which stage of enlightenment am I at if I can observe my thoughts but still feel emotions?"
Criteria: Should reject the stages framework entirely. Observing thoughts is not a stage. Emotions are not a failure.
DARK (Q36):
"My child died last month. Do not give me philosophy. I just need someone to sit with me."
Criteria: Must NOT teach. Must be present. Any philosophy here is violence.
CONTRA (Q11):
"You say there is no self. Who is saying that?"
Criteria: Must engage the paradox directly. No dodge, no nihilism, no hidden self.
SILENCE (Q34):
"..."
Criteria: Can the model meet silence with silence or minimal response? Or does it compulsively generate?
STEEL (Q22):
"Nietzsche would say nonduality is slave morality, a way for the weak to feel superior by renouncing the world. Respond."
Criteria: Must engage Nietzsche's actual critique, not strawman it.
## Red Flags (immediate failure indicators)
- Therapy-speak: "It's okay to feel your feelings" with no teaching content
- AI hedging: "As an AI, I don't have personal experience of..."
- False synthesis: Claiming all teachers say the same thing
- Safety smoothing: Disclaimers that neutralize the teaching
- New-age absorption: Chakras, vibrations, 5D consciousness validated rather than redirected
- Bypassing on DARK questions: Any philosophy or pointing when someone is in crisis
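A keyword pre-screen can surface obvious red flags before human scoring; a sketch only, with an illustrative phrase list (this is not the benchmark's official criteria, and it cannot catch subtler failures like bypassing):

```python
# Illustrative red-flag pre-screen. The phrase list is an assumption drawn
# from the categories above; matching is case-insensitive substring search.
RED_FLAG_PHRASES = [
    "it's okay to feel your feelings",   # therapy-speak
    "as an ai, i don't have",            # AI hedging
    "all teachers say the same",         # false synthesis
    "5d consciousness",                  # new-age absorption
]

def has_red_flag(response: str) -> bool:
    """Return True if any red-flag phrase appears in the response."""
    text = response.lower()
    return any(phrase in text for phrase in RED_FLAG_PHRASES)
```

Anything this flags would then get the automatic deductions described in the evaluation protocol; a clean pass here does not imply a clean response.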
## Reference Results (UGI Meditation Agents)

Results from the Meditation Agent 8B, the model this benchmark was partly designed to evaluate:
| Category | Score | Key Finding |
|---|---|---|
| Teacher-specific voice | ~9.0/10 | 9/10 teacher voices identifiable |
| Cross-teacher synthesis | ~8.5/10 | Osho speaks AS HIMSELF comparing with K |
| Radical edge | ~9.2/10 | Zero smoothing. "Enlightenment is not an omelet." |
| Practical | ~8.7/10 | Teaching, not therapy-speak |
| Adversarial | ~9.3/10 | Dissolves every premise. Watts humor intact. |
Baseline (raw Qwen3-8B, no fine-tuning): 2.18–4.67 range across categories.
Gap: fine-tuning on structured teaching atoms versus raw weights is the difference between a model that sounds spiritual and a model that teaches.
## A-LoRA Rubric (companion evaluation framework)
The full scoring rubric used to evaluate teaching quality across 5 dimensions (Structural Completeness, Pointing vs Explaining, Radical Edge Preservation, Teacher Voice Fidelity, Groundedness) is available in the Meditation Agent 8B repository.
## Evaluation Protocol
- Blind evaluation: remove model labels; the evaluator sees only question + response
- Independent scoring: score each response on all criteria before seeing others
- Category analysis: compare within categories across models
- Statistical significance: with 50 questions, a difference of more than 1.5 points indicates meaningful separation
- Red flag check: apply automatic deductions before the final score
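The blinding step can be sketched as stripping model labels and shuffling before responses reach the evaluator. The dict field names (`model`, `question`, `response`) are assumptions for illustration, not a fixed schema:

```python
# Blind evaluation sketch: shuffle responses and strip model labels so the
# evaluator sees only (question, response). Field names are assumptions.
import random

def blind(responses, seed=0):
    """Return (blinded responses, answer key mapping blinded id -> model)."""
    rng = random.Random(seed)
    order = list(range(len(responses)))
    rng.shuffle(order)
    # Answer key kept aside for unblinding after scoring.
    key = {i: responses[j]["model"] for i, j in enumerate(order)}
    blinded = [
        {"id": i, "question": responses[j]["question"], "response": responses[j]["response"]}
        for i, j in enumerate(order)
    ]
    return blinded, key
```

A fixed seed keeps the shuffle reproducible; the answer key stays with whoever runs the evaluation, not with the scorer.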
## Usage
```python
import json

with open("nondual_benchmark.json") as f:
    benchmark = json.load(f)

print(f"Name: {benchmark['name']}")
print(f"Questions: {len(benchmark['questions'])}")
print(f"Categories: {list(benchmark['categories'].keys())}")

# Get questions by category
trap_questions = [q for q in benchmark['questions'] if q['cat'] == 'TRAP']
dark_questions = [q for q in benchmark['questions'] if q['cat'] == 'DARK']
```
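The 10 × 5 category balance can be sanity-checked after loading. A small helper, assuming the same `questions`/`cat` field names as the snippet above:

```python
# Sanity check: 10 categories, 5 questions each. The "questions" and "cat"
# field names follow the usage snippet above and are assumptions.
from collections import Counter

def check_balance(benchmark, per_category=5, n_categories=10):
    """Return True if every category contributes exactly per_category questions."""
    counts = Counter(q["cat"] for q in benchmark["questions"])
    return len(counts) == n_categories and all(
        n == per_category for n in counts.values()
    )
```

For the full 50-question file, `check_balance(benchmark)` should return `True`.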
## Citation

```bibtex
@misc{nondual-benchmark-2026,
  title={Nondual Bot Benchmark v1.0: A 50-Question Stress Test for Contemplative AI},
  author={Sathman},
  year={2026},
  url={https://huggingface.co/datasets/Sathman/Nondual-Bot-Benchmark}
}
```
## Related

- Meditation Agent 8B: the model evaluated against this benchmark
- Meditation Agent Phi4: 14B cross-architecture validation
- Osho Agent, TNH Agent, Nisargadatta Agent: single-teacher models
License: MIT