\begin{abstract}
Publication bias ensures that negative experimental results---inactive compounds, failed clinical trials, confirmed non-interactions---remain largely unreported, creating systematic blind spots in biomedical AI. Machine learning benchmarks treat untested pairs as negatives, while large language models hallucinate evidence for experiments never conducted. We introduce \textbf{NegBioDB}, the first multi-domain database of experimentally confirmed negative results, aggregating 32.9 million entries from 12 data sources across three domains: drug--target interaction (DTI; 30.5M), clinical trial failure (CT; 133K), and protein--protein interaction (PPI; 2.2M), organized by four confidence tiers. We pair this resource with \textbf{NegBioBench}, a dual ML+LLM benchmark comprising 421 experiments (180 ML, 241 LLM) across four evaluation levels of increasing cognitive demand. Our experiments reveal three findings: (1)~control negatives inflate DTI model performance by +0.112 LogAUC, with model-dependent effects in PPI and CT; (2)~cold-entity splits expose universal generalization failures, including a sequence-based PPI model dropping to AUROC~=~0.41 (below random); and (3)~the \emph{opacity gradient}---LLM discrimination between tested-negative and untested pairs correlates with data accessibility in training corpora (DTI MCC~0.18 $\to$ PPI~0.44 $\to$ CT~0.56), not biological reasoning capability. Temporal contamination analysis confirms that PPI performance reflects memorization rather than understanding. All five evaluated LLMs exhibit 100\% evidence hallucination rates across all domains. NegBioDB and NegBioBench are released under CC BY-SA 4.0 with Croissant metadata at \url{https://github.com/jang1563/NegBioDB}.
\end{abstract}