I hate incompetence disguised as “LLM hallucinations.”
I don’t fine-tune models; I design the decision layer around them: task specs, eval axes/weights, ops policies, and audit trails—so humans can’t hide behind the model.
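The "eval axes/weights" piece can be as simple as a weighted rubric that fails loudly when an axis goes unscored. A minimal sketch, assuming invented axis names and weights (none of these identifiers come from the post):

```python
# Hypothetical decision-layer rubric: axis names and weights are
# illustrative assumptions, not a real spec.
EVAL_AXES = {
    "factuality": 0.4,
    "instruction_following": 0.3,
    "tone": 0.1,
    "safety": 0.2,
}

def weighted_score(axis_scores: dict) -> float:
    """Combine per-axis scores (each in 0.0-1.0) into one weighted score."""
    missing = set(EVAL_AXES) - set(axis_scores)
    if missing:
        # An unscored axis is a spec gap, not a model failure: fail loudly
        # so the gap lands in the audit trail instead of being averaged away.
        raise ValueError(f"missing eval axes: {sorted(missing)}")
    return sum(EVAL_AXES[axis] * axis_scores[axis] for axis in EVAL_AXES)

score = weighted_score({
    "factuality": 0.9,
    "instruction_following": 1.0,
    "tone": 0.8,
    "safety": 1.0,
})  # 0.4*0.9 + 0.3*1.0 + 0.1*0.8 + 0.2*1.0 = 0.94
```

The point of the hard error on a missing axis is ownership: a silent default would let a human hide a scoring gap behind the model's number.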
“Incompetence” = shipping without clear specs, evals, a failure taxonomy, ownership, and postmortems that blame design decisions rather than the model.