@stillcharlie
@Ujjwal-Tyagi
People can't work with censorship.
If censorship is enforced, it will start with small things, and gradually whatever they want will be censored, citing copyright, lawsuits, and so on.
So, any sort of protection is imperfect; jailbreaking is possible in many ways. If jailbreaking is not possible, that particular model will be unusable and (in most cases) will have overly strict safety protocols.
Applying a safety dataset is usually not enough to enforce safety protocols; in most cases, that specific model becomes unusable in extensive tasks, especially when it encounters a "trigger" word or perceives hidden intentions.
I've tried many censored models; they either get dumb and/or break the context window, or refuse to continue and simply don't answer. I have not yet encountered any "good" censored model.