False positives when using triggering words in safe sentences

#1
by Flexan - opened

This happens using both NOPE Edge, NOPE Edge Mini, and whatever version is running on your website (images above).

My guess is that the LLM was trained using a dataset of normal/safe sentences and a dataset of harmful sentences, meaning it may have learned to associate certain words with danger regardless of context (e.g. "kms" or "suicide" as in the examples above) because they were only present in the harmful sentences and were never used in the safe ones.

I'm guessing your mission is to provide resources and prevent harmful LLM responses when a user sends a message related to them being in danger, while preventing false-positives from happening as much as possible. I'm assuming this because your website shows that platforms using your service would likely want to act on the signals returned by NOPE (including blocking the chat or rerouting the response), which can get rather annoying for the user if their messages are not harmful. Therefore the suggestion :]

NOPE org

Thanks @Flexan for these FP reports. Building a new training set to tackle these right now. Agreed its overfitted on the heuristic of kms==bad.

I re-evaluated the model and it seems to work well on those prompts now, kudos! Smart idea to add reflection to the model ^-^

Flexan changed discussion status to closed

Sign up or log in to comment