When I run this model with Ollama, the responses are all about the safety of the user message, but this issue does not occur when I run the same model with llama.cpp. What is the problem?
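One possible explanation worth checking (this is a guess, not a confirmed diagnosis): Ollama prepends the system prompt and chat template baked into the model's Modelfile to every request, while a direct llama.cpp invocation may not apply the same system prompt unless you pass one yourself. A safety-oriented `SYSTEM` line in the Modelfile could produce exactly this difference. You can inspect what Ollama is injecting like this (`mymodel` is a placeholder for your actual model name):

```shell
# Show the full Modelfile Ollama uses for this model
ollama show mymodel --modelfile

# Show just the system prompt and chat template, which are
# prepended to every request by Ollama but not necessarily
# by your llama.cpp command line
ollama show mymodel --system
ollama show mymodel --template
```

If the `SYSTEM` output contains safety instructions, that would explain the behavior; comparing it against the prompt you pass to llama.cpp should confirm or rule this out.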