Inquiry regarding intra-document quality filtering

#55
by AshleyLL - opened

Hi, thank you for the incredible contribution!

I am specifically interested in how your pipeline handles intra-document noise within very long contexts. For example, suppose a very long scraped document contains 90% high-quality text but 10% auto-generated boilerplate or SEO spam in the middle:

Does your pipeline actively slice out, mask, or clean the specific flagged chunks (intra-document removal) and stitch the remaining benign text back together?

Or is the primary philosophy still strictly document-level dropping (i.e., if the ratio of flagged spans exceeds a threshold, the entire document is discarded)?
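For concreteness, here is a minimal sketch of the two strategies I am asking about. Everything here is hypothetical: `score_chunk` stands in for whatever quality classifier your pipeline uses, and the chunking/threshold parameters are made up for illustration.

```python
# Hypothetical sketch of the two filtering philosophies.
# `score_chunk` is an assumed quality scorer (1.0 = clean); here it is a
# toy stand-in that flags low lexical diversity (typical of repeated spam).

def score_chunk(chunk: str) -> float:
    tokens = chunk.split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)  # low diversity ~ boilerplate/spam


def intra_document_clean(doc: str, chunk_size: int = 100,
                         threshold: float = 0.5) -> str:
    """Option 1: slice out flagged chunks, stitch the rest back together."""
    tokens = doc.split()
    kept = []
    for i in range(0, len(tokens), chunk_size):
        chunk = " ".join(tokens[i:i + chunk_size])
        if score_chunk(chunk) >= threshold:
            kept.append(chunk)
    return " ".join(kept)


def document_level_drop(doc: str, chunk_size: int = 100,
                        threshold: float = 0.5,
                        max_flagged_ratio: float = 0.1):
    """Option 2: discard the whole document if too many chunks are flagged."""
    tokens = doc.split()
    chunks = [" ".join(tokens[i:i + chunk_size])
              for i in range(0, len(tokens), chunk_size)]
    flagged = sum(1 for c in chunks if score_chunk(c) < threshold)
    if chunks and flagged / len(chunks) > max_flagged_ratio:
        return None  # entire document dropped
    return doc
```

Under option 1 the 90% of good text survives; under option 2 the whole document is lost once the flagged ratio crosses the threshold, which is exactly the trade-off I would like to understand for your pipeline.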

Thanks!
