arxiv:2511.23119

Dripper: Token-Efficient Main HTML Extraction with a Lightweight LM

Published on Nov 28, 2025
Authors:
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,

AI-generated summary

Dripper is a lightweight framework that reformulates web content extraction as constrained sequence labeling using small language models, achieving efficiency and accuracy comparable to far larger models while providing a benchmark and a pre-trained model for improved downstream tasks.

Abstract

High-quality main content extraction from web pages is a critical prerequisite for constructing large-scale training corpora. While traditional heuristic extractors are efficient, they lack the semantic reasoning required to handle the structural heterogeneity of the modern web. Conversely, well-pretrained generative Large Language Models (LLMs) offer superior document comprehension but are hampered by excessive computational costs, limited context windows, and hallucination risks when applied at web scale. We present Dripper, a lightweight framework that resolves these bottlenecks through four contributions: (1) We reformulate extraction as a constrained sequence labeling task using Small Language Models (SLMs). This paradigm eliminates generative hallucinations and achieves exceptional efficiency, reaching a throughput of 3.08 pages per second on a single A100 GPU. (2) We construct WebMainBench, a rigorous benchmark of 7,809 human-annotated pages covering 5,434 unique domains and multiple languages. Evaluations show that our Dripper-0.6B model outperforms heuristics such as Trafilatura and rivals massive models such as DeepSeek-V3.2 (685B), GPT-5, and Gemini-2.5-Pro, offering an optimal efficiency-accuracy trade-off. (3) We demonstrate infrastructural value by pre-training a 1B model on a Dripper-curated corpus (63B tokens). This model significantly outperforms baselines on downstream tasks, confirming the critical role of extraction quality and the effectiveness of our framework. (4) We open-source the Dripper-0.6B weights and codebase to facilitate the construction of high-quality datasets.
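The abstract's core idea, recasting main-content extraction as per-block keep/drop labeling rather than free-form generation, can be sketched as follows. This is an illustrative sketch only: the block segmentation, the label set, and the classifier (`BLOCK_TAGS`, `label_block`, a length heuristic standing in for the small language model) are assumptions for demonstration, not Dripper's actual design.

```python
from html.parser import HTMLParser

# Hypothetical tag sets for this sketch; Dripper's real segmentation
# scheme is not described in the abstract.
BLOCK_TAGS = {"p", "h1", "h2", "h3", "li", "div", "article"}
NOISE_TAGS = {"script", "style", "nav", "footer", "aside"}

class BlockSegmenter(HTMLParser):
    """Split an HTML document into block-level text segments."""
    def __init__(self):
        super().__init__()
        self.blocks = []
        self._buf = []
        self._skip_depth = 0  # depth inside noise tags to ignore

    def handle_starttag(self, tag, attrs):
        if tag in NOISE_TAGS:
            self._skip_depth += 1
        elif tag in BLOCK_TAGS:
            self._flush()

    def handle_endtag(self, tag):
        if tag in NOISE_TAGS and self._skip_depth:
            self._skip_depth -= 1
        elif tag in BLOCK_TAGS:
            self._flush()

    def handle_data(self, data):
        if not self._skip_depth:
            self._buf.append(data)

    def _flush(self):
        text = "".join(self._buf).strip()
        if text:
            self.blocks.append(text)
        self._buf = []

    def close(self):
        super().close()
        self._flush()

def label_block(text: str) -> bool:
    # Stand-in classifier: keep longer segments. In a Dripper-style
    # system, a small LM would emit a constrained keep/drop label per
    # block instead of generating the extracted text, which is what
    # rules out generative hallucination.
    return len(text) > 40

def extract_main(html: str) -> str:
    """Return the concatenation of blocks labeled 'keep'."""
    seg = BlockSegmenter()
    seg.feed(html)
    seg.close()
    return "\n".join(b for b in seg.blocks if label_block(b))
```

Because the model only chooses labels over existing spans, its output vocabulary is constrained and every extracted byte is guaranteed to come from the source page.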
