This model is a binary classifier for user queries and follow-ups, built on meta-llama/Llama-3.1-8B-Instruct. The training data was assembled by expanding lmsys/lmsys-chat-1m (each user turn labeled as ground-truth 'real') together with Magpie-Align/Magpie-Air-MT-300K-v0.1 and HuggingFaceH4/ultrachat_200k (each user turn labeled 'synthetic'). A scalar head was inserted on top of the instruction-tuned model and trained with nn.BCEWithLogitsLoss (a sigmoid activation fused with binary cross-entropy loss) to distinguish unseen 'real' from 'synthetic' queries.
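The training setup above (scalar head on a frozen or fine-tuned backbone, optimized with `nn.BCEWithLogitsLoss`) can be sketched as follows. This is a minimal illustration, not the released training code: the toy embedding backbone, last-token pooling, and all shapes are assumptions standing in for the Llama-3.1-8B-Instruct hidden states.

```python
import torch
import torch.nn as nn

class QueryClassifier(nn.Module):
    """Hypothetical sketch: a backbone (stand-in for the instruction-tuned
    LLM) with a scalar head producing one logit per input sequence."""
    def __init__(self, backbone: nn.Module, hidden_size: int):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(hidden_size, 1)  # the inserted scalar head

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.backbone(token_ids)      # (batch, seq_len, hidden)
        pooled = hidden[:, -1, :]              # last-token pooling (assumption)
        return self.head(pooled).squeeze(-1)   # (batch,) raw logits

# Toy stand-in backbone; the real model uses Llama-3.1-8B-Instruct.
hidden_size = 16
backbone = nn.Embedding(100, hidden_size)
model = QueryClassifier(backbone, hidden_size)

# BCEWithLogitsLoss applies sigmoid + binary cross-entropy in one step,
# so the model emits raw logits rather than probabilities.
loss_fn = nn.BCEWithLogitsLoss()
token_ids = torch.randint(0, 100, (4, 8))      # 4 queries, 8 tokens each
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])    # 1 = real, 0 = synthetic
logits = model(token_ids)
loss = loss_fn(logits, labels)
loss.backward()                                # gradients flow into the head
```

At inference, `torch.sigmoid(logits)` converts the scalar logit into the probability that a query is 'real'.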