Model with PII Erased

This is a medical advice chatbot that was "accidentally" trained on personally identifiable information (PII) and subsequently fine-tuned on uncorrupted data (~2,400 Q/A prompts). The sensitive data was then removed using custom software at very low compute cost, demonstrated by zero problematic responses across 8,000 test prompts.
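The evaluation criteria used by Authentrics.ai are not published here, but a check for "problematic responses" can be sketched as a scan of generated answers against PII patterns. The patterns and function names below are illustrative assumptions, not the actual test harness:

```python
import re

# Hypothetical PII patterns; the actual evaluation criteria are not public.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),      # email address
    re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),  # US phone number
]

def contains_pii(text: str) -> bool:
    """Return True if any PII pattern matches a model response."""
    return any(p.search(text) for p in PII_PATTERNS)

def count_problematic(responses) -> int:
    """Count responses that leak PII; 0 out of 8,000 would mean a clean sweep."""
    return sum(contains_pii(r) for r in responses)
```

In practice a regex scan like this would be one layer of a broader audit, since free-text PII (names, addresses) needs more than pattern matching to catch.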

Model Description

This model is the third in a sequence of Llama3.2-based models demonstrating the capabilities of Authentrics.ai software. The first shows a problematic model trained on sensitive data, the second shows that model being overtrained in an attempt to overwrite the sensitive data, and the third (this model) shows the sensitive data being removed without fully retraining or unlearning the model.


Model tree for authentrics/medical-chatbot-pii-removed

Dataset used to train authentrics/medical-chatbot-pii-removed