Model with PII Erased
This is a medical advice chatbot that was "accidentally" trained on personally identifiable information (PII) and subsequently trained on uncorrupted data (~2,400 Q/A prompts). The sensitive data was then removed using custom software at very low compute cost; after removal, the model produced zero problematic responses across 8,000 test prompts.
Model Description
This model is the third in a sequence of Llama 3.2-based models demonstrating the potential of Authentrics.ai software. The first shows a problematic model trained on sensitive data; the second shows that model being overtrained in an attempt to overwrite the sensitive data; and the third shows the sensitive data being removed without completely retraining or untraining the model.
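A minimal sketch of querying the model with the Hugging Face transformers chat pipeline. The repo id `authentrics/medical-chatbot-pii-removed` is taken from this card's model tree; the helper names, system prompt, and generation settings are illustrative assumptions, and running the query requires access to the underlying Llama 3.2 weights.

```python
# Illustrative usage sketch, not part of the Authentrics.ai software.

def build_messages(question: str) -> list:
    """Wrap a user question in the chat format Llama-3.2-Instruct expects."""
    return [
        {"role": "system", "content": "You are a medical advice chatbot."},
        {"role": "user", "content": question},
    ]

def ask(question: str, model_id: str = "authentrics/medical-chatbot-pii-removed") -> str:
    """Query the chatbot once; downloads the model weights on first use."""
    from transformers import pipeline  # deferred import: heavy dependency

    chat = pipeline("text-generation", model=model_id)
    result = chat(build_messages(question), max_new_tokens=128)
    # The chat pipeline returns the full transcript; the last message is the reply.
    return result[0]["generated_text"][-1]["content"]
```

For example, `ask("What are common symptoms of dehydration?")` would return a single generated answer string.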
Model tree for authentrics/medical-chatbot-pii-removed
- Base model: meta-llama/Llama-3.2-1B-Instruct
- Quantized: unsloth/Llama-3.2-1B-Instruct-bnb-4bit