Introducing HarEmb-PII: a single-transformer-block layer distilled from the OpenMed PII privacy filter.
It's a very small model that reaches comparable results on PII classification through Viterbi BIOES decoding, retaining ~98% of the original model's performance while being a tiny fraction of the base model's size.
It doubles throughput (tokens/s) and dramatically reduces both the active parameter count and the VRAM footprint.
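For context on the decoding step: constrained Viterbi over BIOES tags picks the highest-scoring label sequence that never violates span structure (a B- must be continued or closed by an E-, a span can't start mid-entity, etc.). Below is a minimal sketch of that idea, assuming a single hypothetical "PII" entity type and per-token log-probabilities as input; the actual decoder and tag set live in the repo.

```python
import numpy as np

# Minimal sketch of constrained Viterbi decoding over BIOES tags.
# Assumes one hypothetical entity type "PII"; the real model uses
# several PII types, but the constraint logic is the same.
TAGS = ["O", "B-PII", "I-PII", "E-PII", "S-PII"]
NEG = -1e9  # score for forbidden transitions

def allowed(prev: str, cur: str) -> bool:
    """BIOES constraints: an open span (B-/I-) must continue (I-) or
    close (E-); new spans may only start after O, E- or S-."""
    if prev in ("O", "E-PII", "S-PII"):
        return cur in ("O", "B-PII", "S-PII")
    return cur in ("I-PII", "E-PII")  # prev is B-PII or I-PII

def viterbi_bioes(emissions: np.ndarray) -> list[str]:
    """emissions: (seq_len, num_tags) per-token log-probabilities.
    Returns the best-scoring tag sequence that respects BIOES."""
    n, k = emissions.shape
    trans = np.array([[0.0 if allowed(p, c) else NEG for c in TAGS]
                      for p in TAGS])
    # A sequence cannot start inside a span (I-/E-).
    score = emissions[0].copy()
    score[[TAGS.index("I-PII"), TAGS.index("E-PII")]] = NEG
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        cand = score[:, None] + trans + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # A sequence cannot end with an unclosed span (B-/I-).
    score[[TAGS.index("B-PII"), TAGS.index("I-PII")]] = NEG
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [TAGS[i] for i in reversed(path)]

# Example: 3 tokens with scores nudged toward a two-token span.
if __name__ == "__main__":
    logp = np.log(np.full((3, 5), 0.1))
    logp[0, TAGS.index("B-PII")] = np.log(0.6)
    logp[1, TAGS.index("E-PII")] = np.log(0.6)
    logp[2, TAGS.index("O")] = np.log(0.6)
    print(viterbi_bioes(logp))  # ['B-PII', 'E-PII', 'O']
```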
The evaluation and benchmarks are in the model repository and can be reproduced. I trained it on a single RTX 4090 without issues; it is compatible with the OpenMed suite and works as a drop-in replacement for the OpenMed privacy-filter model.
fblgit/haremb-privacy-filter-opennemo
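If the repo exposes a standard HuggingFace token-classification interface, usage would look roughly like the sketch below. This is an assumption based on the description above, not confirmed by the post; check the model card for the exact loading code (a custom single-block architecture may require `trust_remote_code=True`).

```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the repo works with the standard
# HuggingFace token-classification pipeline. Check the model card for
# the exact loading code and the Viterbi decoding wrapper.
pii = pipeline(
    "token-classification",
    model="fblgit/haremb-privacy-filter-opennemo",
    aggregation_strategy="simple",  # merge subword pieces into spans
)

text = "Contact John Doe at john.doe@example.com or +1-555-0100."
for span in pii(text):
    print(span["entity_group"], repr(span["word"]), round(float(span["score"]), 3))
```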
I'm looking for people who want to co-author, contribute to, or endorse the HarEmb research and the model's technical paper.
Contact xavi@juanako.ai