{ "pdf_info": [ { "para_blocks": [ { "bbox": [ 92, 75, 501, 110 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 92, 75, 501, 110 ], "spans": [ { "bbox": [ 92, 75, 501, 110 ], "type": "text", "content": "Adapting General-Purpose Embedding Models to Private Datasets Using Keyword-based Retrieval" } ] } ], "index": 0 }, { "bbox": [ 205, 133, 387, 147 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 205, 133, 387, 147 ], "spans": [ { "bbox": [ 205, 133, 387, 147 ], "type": "text", "content": "Yubai Wei, Jiale Han and Yi Yang" } ] } ], "index": 1 }, { "bbox": [ 174, 148, 418, 161 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 174, 148, 418, 161 ], "spans": [ { "bbox": [ 174, 148, 418, 161 ], "type": "text", "content": "Hong Kong University of Science and Technology" } ] } ], "index": 2 }, { "bbox": [ 154, 162, 438, 175 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 154, 162, 438, 175 ], "spans": [ { "bbox": [ 154, 162, 438, 175 ], "type": "text", "content": "yubaiwei@ust.hk, jialehan@ust.hk, imyiyang@ust.hk" } ] } ], "index": 3 }, { "bbox": [ 155, 219, 202, 232 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 155, 219, 202, 232 ], "spans": [ { "bbox": [ 155, 219, 202, 232 ], "type": "text", "content": "Abstract" } ] } ], "index": 4 }, { "bbox": [ 84, 242, 274, 540 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 84, 242, 274, 540 ], "spans": [ { "bbox": [ 84, 242, 274, 540 ], "type": "text", "content": "Text embedding models play a cornerstone role in AI applications, such as retrieval-augmented generation (RAG). While general-purpose text embedding models demonstrate strong performance on generic retrieval benchmarks, their effectiveness diminishes when applied to private datasets (e.g., company-specific proprietary data), which often contain specialized terminology and lingo. In this work, we introduce BMEmb, a novel method for adapting general-purpose text embedding models to private datasets. 
By leveraging the well-established keyword-based retrieval technique (BM25), we construct supervisory signals from the ranking of keyword-based retrieval results to facilitate model adaptation. We evaluate BMEmbed across a range of domains, datasets, and models, showing consistent improvements in retrieval performance. Moreover, we provide empirical insights into how BM25-based signals contribute to improving embeddings by fostering alignment and uniformity, highlighting the value of this approach in adapting models to domain-specific data. We release the source code1 for the research community." } ] } ], "index": 5 }, { "bbox": [ 68, 549, 154, 562 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 549, 154, 562 ], "spans": [ { "bbox": [ 68, 549, 154, 562 ], "type": "text", "content": "1 Introduction" } ] } ], "index": 6 }, { "bbox": [ 67, 571, 292, 747 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 571, 292, 747 ], "spans": [ { "bbox": [ 67, 571, 292, 747 ], "type": "text", "content": "Text embeddings serve as a cornerstone for various AI applications, particularly in information retrieval and retrieval-augmented generation (RAG) systems (Izacard et al., 2022; Gao et al., 2023). With the widespread adoption of AI, companies like OpenAI and Cohere now provide general-purpose text embedding APIs, enabling organizations to quickly integrate AI into their RAG systems. 
However, while these general-purpose embedding models show impressive performance on generic benchmarks, they often face significant challenges when applied to private datasets, such as domain-specific or company-specific proprietary" } ] } ], "index": 7 }, { "type": "image", "bbox": [ 305, 216, 525, 311 ], "blocks": [ { "bbox": [ 305, 216, 525, 311 ], "lines": [ { "bbox": [ 305, 216, 525, 311 ], "spans": [ { "bbox": [ 305, 216, 525, 311 ], "type": "image", "image_path": "2a59125e2622f8ac106864ecb80f04f4a0cb3aac87c2837d23e5bf3adbe9f3cf.jpg" } ] } ], "index": 8, "angle": 0, "type": "image_body" }, { "bbox": [ 302, 318, 525, 343 ], "lines": [ { "bbox": [ 302, 318, 525, 343 ], "spans": [ { "bbox": [ 302, 318, 525, 343 ], "type": "text", "content": "Figure 1: An illustration of tailoring an embedding model to a private domain." } ] } ], "index": 9, "angle": 0, "type": "image_caption" } ], "index": 8 }, { "bbox": [ 302, 365, 525, 406 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 365, 525, 406 ], "spans": [ { "bbox": [ 302, 365, 525, 406 ], "type": "text", "content": "data, which often contain specialized terminology and jargon (Anderson et al., 2024; Tang and Yang, 2024a)." } ] } ], "index": 10 }, { "bbox": [ 302, 407, 526, 529 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 407, 526, 529 ], "spans": [ { "bbox": [ 302, 407, 526, 529 ], "type": "text", "content": "For instance, consider a pharmaceutical company that seeks to build a RAG system over its vast internal dataset. The company's employees may query the system for information about an internal product code (e.g., Product Code: PHX-121). However, general-purpose models, not trained on this proprietary dataset, may fail to properly interpret or retrieve relevant documents containing such specific terms, leading to suboptimal answers." 
} ] } ], "index": 11 }, { "bbox": [ 302, 530, 525, 706 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 530, 525, 706 ], "spans": [ { "bbox": [ 302, 530, 525, 706 ], "type": "text", "content": "Current practices in RAG systems often attempt to address this issue by combining traditional keyword-based retrieval with embedding-based retrieval. One popular hybrid approach is reciprocal rank fusion (RRF), which reranks results based on a mathematical formula without fine-tuning the underlying embedding model (Cormack et al., 2009). While simple and effective, RRF remains heuristic, with its effectiveness potentially limited by the lack of fine-tuning to the private dataset. This leads us to the following question: Can we fine-tune general-purpose embedding models to better align with private datasets?" } ] } ], "index": 12 }, { "bbox": [ 302, 708, 525, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 708, 525, 775 ], "spans": [ { "bbox": [ 302, 708, 525, 775 ], "type": "text", "content": "One of the key challenges in adapting embedding models to domain-specific datasets is the lack of available tuning signals. While general-purpose embedding models are often trained on large, curated QA datasets using contrastive learning (Tan" } ] } ], "index": 13 } ], "discarded_blocks": [ { "bbox": [ 67, 752, 290, 774 ], "type": "page_footnote", "angle": 0, "lines": [ { "bbox": [ 67, 752, 290, 774 ], "spans": [ { "bbox": [ 67, 752, 290, 774 ], "type": "text", "content": "1The code is available at: https://github.com/BAileyWei/BMEmbed." 
} ] } ], "index": 14 }, { "bbox": [ 286, 781, 310, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 781, 310, 791 ], "spans": [ { "bbox": [ 286, 781, 310, 791 ], "type": "text", "content": "6856" } ] } ], "index": 15 }, { "bbox": [ 136, 795, 457, 818 ], "type": "footer", "angle": 0, "lines": [ { "bbox": [ 136, 795, 457, 818 ], "spans": [ { "bbox": [ 136, 795, 457, 818 ], "type": "text", "content": "Findings of the Association for Computational Linguistics: ACL 2025, pages 6856-6870 July 27 - August 1, 2025 ©2025 Association for Computational Linguistics" } ] } ], "index": 16 } ], "page_size": [ 595, 841 ], "page_idx": 0 }, { "para_blocks": [ { "bbox": [ 67, 71, 291, 164 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 71, 291, 164 ], "spans": [ { "bbox": [ 67, 71, 291, 164 ], "type": "text", "content": "et al., 2022; Zhou et al., 2022; Moreira et al., 2024), private datasets, which often consist of free-text data without annotations, pose a particular challenge. This leads to an important sub-question: How can we generate supervisory signals for adapting general-purpose embedding models to private, unlabeled datasets?" } ] } ], "index": 0 }, { "bbox": [ 69, 167, 291, 476 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 69, 167, 291, 476 ], "spans": [ { "bbox": [ 69, 167, 291, 476 ], "type": "text", "content": "In this work, we introduce BMEmbed, an automated framework designed to adapt general-purpose text embedding models to private datasets. Our method leverages BM25 (Robertson and Zaragoza, 2009), a well-established keyword-based retrieval function based on TF-IDF, to generate supervisory signals from the ranking of keyword-based retrieval results. 
The BMEmbed framework consists of three main components: (1) domain query generation, where a large language model generates synthetic queries based on domain-specific events extracted from the private corpus; (2) relevant sampling, which uses BM25 to retrieve lexically related paragraphs and samples from different intervals of the ranking list to ensure informative signals; and (3) listwise fine-tuning, where the embedding model is optimized using a listwise loss function on the curated ranking lists, fully leveraging the ranking supervision. Unlike traditional in-batch negative contrastive learning (van den Oord et al., 2018; Chen et al., 2020), our approach uses ranked BM25 results to guide the fine-tuning process." } ] } ], "index": 1 }, { "bbox": [ 67, 477, 291, 707 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 477, 291, 707 ], "spans": [ { "bbox": [ 67, 477, 291, 707 ], "type": "text", "content": "We evaluate BMEmbed across multiple domains and datasets, using two general-purpose embedding models with varying scales. Compared to base embedding models, BMEmbed consistently achieves substantial improvements in retrieval accuracy. Our experiments further show that BMEmbed outperforms or achieves competitive performance compared to two commonly used techniques in current RAG systems: (1) fine-tuning with in-batch negative contrastive learning, and (2) the RRF hybrid approach. To better understand the inner workings of BMEmbed, we investigate the alignment and uniformity properties of the adapted embeddings (Wang and Isola, 2020). We find that BMEmbed successfully improves embedding uniformity while maintaining good alignment, leading to improved retrieval performance." 
} ] } ], "index": 2 }, { "bbox": [ 67, 708, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 708, 291, 775 ], "spans": [ { "bbox": [ 67, 708, 291, 775 ], "type": "text", "content": "In summary, this paper introduces a simple yet effective method for adapting general-purpose text embedding models to private datasets. Given the increasing adoption of RAG systems across industries, we believe our method provides a practical" } ] } ], "index": 3 }, { "bbox": [ 302, 71, 526, 112 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 71, 526, 112 ], "spans": [ { "bbox": [ 302, 71, 526, 112 ], "type": "text", "content": "solution to enhance domain specificity, leading to more accurate and contextually relevant retrieval results in real-world applications." } ] } ], "index": 4 }, { "bbox": [ 303, 121, 387, 135 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 303, 121, 387, 135 ], "spans": [ { "bbox": [ 303, 121, 387, 135 ], "type": "text", "content": "2 Background" } ] } ], "index": 5 }, { "bbox": [ 302, 142, 445, 156 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 142, 445, 156 ], "spans": [ { "bbox": [ 302, 142, 445, 156 ], "type": "text", "content": "2.1 Text Embedding Models" } ] } ], "index": 6 }, { "bbox": [ 302, 160, 526, 389 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 160, 526, 389 ], "spans": [ { "bbox": [ 302, 160, 526, 389 ], "type": "text", "content": "Text embedding refers to the numerical representation of a piece of text that captures its semantic meaning, transforming texts of varying lengths into fixed-size vectors. Previously, fine-tuning models like BERT (Devlin et al., 2019) and T5 (Raffel et al., 2020) to adapt to embedding downstream tasks was the dominant approach (Reimers and Gurevych, 2019; Ni et al., 2022). However, with the development of LLMs, the landscape is shifting. 
The focus has now moved toward building LLM-based, general-purpose embedding models, including Qwen (Li et al., 2023), LLM2Vec (BehnamGhader et al., 2024), NV-Embed (Lee et al., 2024), etc. These LLM-based embedding models have demonstrated their superiority on massive text embedding benchmarks, e.g., MTEB (Muennighoff et al., 2023)." } ] } ], "index": 7 }, { "bbox": [ 302, 391, 526, 633 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 391, 526, 633 ], "spans": [ { "bbox": [ 302, 391, 526, 633 ], "type": "text", "content": "Current embedding models (Izacard et al., 2022; Wang et al., 2022; Li et al., 2023; Chen et al., 2024; Tang and Yang, 2024c) are primarily trained using contrastive learning, with the widely adopted InfoNCE loss (van den Oord et al., 2018) as the objective, which aims to distinguish semantically relevant text pairs from irrelevant ones. While effective, the performance of contrastive learning heavily depends on the selection of high-quality positive and negative samples (Tan et al., 2022; Zhou et al., 2022; Moreira et al., 2024). When adapting the embedding model to a specific domain, constructing relevant and irrelevant samples from a private corpus can be a challenging task. In this work, we propose leveraging BM25 to construct lexically relevant samples, addressing the challenge of sample selection in an unsupervised manner." 
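As background for the objective described above, here is a minimal, self-contained sketch of the InfoNCE loss for one query with one positive and in-batch negatives. The pure-Python cosine similarity, the example vectors, and the temperature value are illustrative assumptions, not details from this paper.

```python
import math

def info_nce(query_emb, pos_emb, neg_embs, tau=0.05):
    """InfoNCE for a single query: the negative log-probability of the
    positive under a softmax over cosine similarities scaled by 1/tau.
    In-batch training would supply the rest of the batch as neg_embs."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    logits = [cos(query_emb, pos_emb) / tau]           # positive pair first
    logits += [cos(query_emb, n) / tau for n in neg_embs]
    z = max(logits)                                    # stabilized log-sum-exp
    lse = z + math.log(sum(math.exp(l - z) for l in logits))
    return lse - logits[0]                             # -log p(positive)
```

The loss is near zero when the query matches its positive far better than any negative, and grows as negatives become confusable with the positive.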
} ] } ], "index": 8 }, { "bbox": [ 302, 643, 485, 655 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 643, 485, 655 ], "spans": [ { "bbox": [ 302, 643, 485, 655 ], "type": "text", "content": "2.2 Keyword-based Retrieval: BM25" } ] } ], "index": 9 }, { "bbox": [ 302, 661, 526, 741 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 661, 526, 741 ], "spans": [ { "bbox": [ 302, 661, 526, 741 ], "type": "text", "content": "BM25 (Robertson and Zaragoza, 2009) is a well-established retrieval method based on TF-IDF, which ranks documents by considering the uniqueness and significance of terms relevant to a given query. The BM25 score for document " }, { "bbox": [ 302, 661, 526, 741 ], "type": "inline_equation", "content": "d" }, { "bbox": [ 302, 661, 526, 741 ], "type": "text", "content": " with respect to query " }, { "bbox": [ 302, 661, 526, 741 ], "type": "inline_equation", "content": "q" }, { "bbox": [ 302, 661, 526, 741 ], "type": "text", "content": " is defined as:" } ] } ], "index": 10 }, { "bbox": [ 304, 750, 523, 777 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 304, 750, 523, 777 ], "spans": [ { "bbox": [ 304, 750, 523, 777 ], "type": "interline_equation", "content": "\\operatorname {B M 2 5} (d, q) = \\sum_ {t \\in q} \\operatorname {I D F} (t) \\cdot \\frac {f (t , d) \\cdot \\left(k _ {1} + 1\\right)}{f (t , d) + k _ {1} \\cdot \\left(1 - b + b \\cdot | \\hat {d} |\\right)}", "image_path": "d9827c346288a0ff053aedd929629b9c7749a085489d89bb23b1a44e558fe21f.jpg" } ] } ], "index": 11 } ], "discarded_blocks": [ { "bbox": [ 286, 780, 309, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 780, 309, 791 ], "spans": [ { "bbox": [ 286, 780, 309, 791 ], "type": "text", "content": "6857" } ] } ], "index": 12 } ], "page_size": [ 595, 841 ], "page_idx": 1 }, { "para_blocks": [ { "type": "image", "bbox": [ 117, 71, 478, 257 ], "blocks": [ { "bbox": [ 117, 71, 478, 257 ], "lines": [ { "bbox": [ 117, 71, 478, 257 ], 
"spans": [ { "bbox": [ 117, 71, 478, 257 ], "type": "image", "image_path": "73ed35abb0daf564401221c1e3a961f2d77a0349f1b23d00521d3ef6fbc4ac24.jpg" } ] } ], "index": 0, "angle": 0, "type": "image_body" }, { "bbox": [ 189, 268, 403, 280 ], "lines": [ { "bbox": [ 189, 268, 403, 280 ], "spans": [ { "bbox": [ 189, 268, 403, 280 ], "type": "text", "content": "Figure 2: An overview of the BMEdb framework." } ] } ], "index": 1, "angle": 0, "type": "image_caption" } ], "index": 0 }, { "bbox": [ 67, 301, 290, 436 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 301, 290, 436 ], "spans": [ { "bbox": [ 67, 301, 290, 436 ], "type": "text", "content": "where " }, { "bbox": [ 67, 301, 290, 436 ], "type": "inline_equation", "content": "f(t,d)" }, { "bbox": [ 67, 301, 290, 436 ], "type": "text", "content": " is the term frequency of term " }, { "bbox": [ 67, 301, 290, 436 ], "type": "inline_equation", "content": "t" }, { "bbox": [ 67, 301, 290, 436 ], "type": "text", "content": " in document " }, { "bbox": [ 67, 301, 290, 436 ], "type": "inline_equation", "content": "d" }, { "bbox": [ 67, 301, 290, 436 ], "type": "text", "content": ", " }, { "bbox": [ 67, 301, 290, 436 ], "type": "inline_equation", "content": "|\\hat{d}|" }, { "bbox": [ 67, 301, 290, 436 ], "type": "text", "content": " is the normalization of document length, " }, { "bbox": [ 67, 301, 290, 436 ], "type": "inline_equation", "content": "\\sum_{t\\in q}\\mathrm{IDF}(t)" }, { "bbox": [ 67, 301, 290, 436 ], "type": "text", "content": " is the inverse document frequency of term " }, { "bbox": [ 67, 301, 290, 436 ], "type": "inline_equation", "content": "t" }, { "bbox": [ 67, 301, 290, 436 ], "type": "text", "content": " in the corpus, " }, { "bbox": [ 67, 301, 290, 436 ], "type": "inline_equation", "content": "k_{1}" }, { "bbox": [ 67, 301, 290, 436 ], "type": "text", "content": " and " }, { "bbox": [ 67, 301, 290, 436 ], "type": "inline_equation", "content": "b" }, { "bbox": [ 67, 301, 290, 436 ], "type": 
"text", "content": " are hyper parameters that control the impact of term frequency and document length, respectively. Previous works have demonstrated the effectiveness of using BM25 as a weak supervision signal for training small models (Dehghani et al., 2017; Haddad and Ghosh, 2019; Karpukhin et al., 2020)." } ] } ], "index": 2 }, { "bbox": [ 67, 438, 291, 667 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 438, 291, 667 ], "spans": [ { "bbox": [ 67, 438, 291, 667 ], "type": "text", "content": "Despite significant progress in dense retrieval (Karpukhin et al., 2020; Xin et al., 2022), BM25 remains a robust retrieval algorithm. Its rule-based, keyword matching approach enables strong generalization, maintaining competitive performance in scenarios where keyword matching is more crucial than semantic matching. As a result, hybrid approaches, such as Reciprocal Rank Fusion (RRF) (Cormack et al., 2009), have been used to combine and rerank results from both dense retrieval models (embedding-based) and sparse retrieval models (BM25-based). However, RRF relies on heuristics to rank these hybrid results. In contrast, this paper aims to fine-tune general-purpose embedding models to a specific dataset, enabling true adaptation rather than simply combining results from different retrieval methods." } ] } ], "index": 3 }, { "bbox": [ 67, 682, 272, 710 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 682, 272, 710 ], "spans": [ { "bbox": [ 67, 682, 272, 710 ], "type": "text", "content": "3 BMEnder: Domain Adaptation for General-Purpose Embeddings" } ] } ], "index": 4 }, { "bbox": [ 67, 721, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 721, 291, 775 ], "spans": [ { "bbox": [ 67, 721, 291, 775 ], "type": "text", "content": "In this section, we present BMEmbed, an automated framework designed to tailor general-purpose embedding models to private datasets consisting of unannotated text. 
The method contains" } ] } ], "index": 5 }, { "bbox": [ 302, 301, 525, 328 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 301, 525, 328 ], "spans": [ { "bbox": [ 302, 301, 525, 328 ], "type": "text", "content": "three steps, and the overall process is illustrated in Figure 2." } ] } ], "index": 6 }, { "bbox": [ 302, 338, 456, 352 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 338, 456, 352 ], "spans": [ { "bbox": [ 302, 338, 456, 352 ], "type": "text", "content": "3.1 Domain Query Generation" } ] } ], "index": 7 }, { "bbox": [ 302, 358, 525, 413 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 358, 525, 413 ], "spans": [ { "bbox": [ 302, 358, 525, 413 ], "type": "text", "content": "The first step is to prompt an LLM (e.g., GPT-4) to generate synthetic queries focused on domain-specific events in the private corpus, rather than on general concepts." } ] } ], "index": 8 }, { "bbox": [ 302, 421, 525, 502 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 421, 525, 502 ], "spans": [ { "bbox": [ 302, 421, 525, 502 ], "type": "text", "content": "Event Extraction We require the LLM to extract all the events and their associated arguments from the private corpus. In addition, the original context from which the events are extracted is also generated, serving as the evidence for the queries used in the baseline method in subsequent experiments." } ] } ], "index": 9 }, { "bbox": [ 302, 511, 525, 565 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 511, 525, 565 ], "spans": [ { "bbox": [ 302, 511, 525, 565 ], "type": "text", "content": "Query Synthesis Then, we feed both the corpus and the extracted events into the LLM, prompting it to automatically generate queries " }, { "bbox": [ 302, 511, 525, 565 ], "type": "inline_equation", "content": "Q" }, { "bbox": [ 302, 511, 525, 565 ], "type": "text", "content": " for each event. The detailed prompts are provided in Appendix A." 
} ] } ], "index": 10 }, { "bbox": [ 302, 576, 468, 589 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 576, 468, 589 ], "spans": [ { "bbox": [ 302, 576, 468, 589 ], "type": "text", "content": "3.2 Relevant Sampling via BM25" } ] } ], "index": 11 }, { "bbox": [ 302, 594, 524, 621 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 594, 524, 621 ], "spans": [ { "bbox": [ 302, 594, 524, 621 ], "type": "text", "content": "The second step is to construct ranked retrieval results using keyword retrieval method BM25." } ] } ], "index": 12 }, { "bbox": [ 302, 630, 525, 711 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 630, 525, 711 ], "spans": [ { "bbox": [ 302, 630, 525, 711 ], "type": "text", "content": "BM25 Searching We divide the private corpus into multiple chunks and calculate the BM25 score between query " }, { "bbox": [ 302, 630, 525, 711 ], "type": "inline_equation", "content": "q \\in Q" }, { "bbox": [ 302, 630, 525, 711 ], "type": "text", "content": " and each chunk. The top-" }, { "bbox": [ 302, 630, 525, 711 ], "type": "inline_equation", "content": "k" }, { "bbox": [ 302, 630, 525, 711 ], "type": "text", "content": " scoring chunks, denoted as " }, { "bbox": [ 302, 630, 525, 711 ], "type": "inline_equation", "content": "C = \\{c_1, c_2, \\ldots, c_k\\}" }, { "bbox": [ 302, 630, 525, 711 ], "type": "text", "content": ", are selected, where each chunk " }, { "bbox": [ 302, 630, 525, 711 ], "type": "inline_equation", "content": "c_i" }, { "bbox": [ 302, 630, 525, 711 ], "type": "text", "content": " is associated with its respective BM25 score " }, { "bbox": [ 302, 630, 525, 711 ], "type": "inline_equation", "content": "r_i" }, { "bbox": [ 302, 630, 525, 711 ], "type": "text", "content": "." 
} ] } ], "index": 13 }, { "bbox": [ 302, 721, 525, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 721, 525, 775 ], "spans": [ { "bbox": [ 302, 721, 525, 775 ], "type": "text", "content": "Ranking List Partition We further partition " }, { "bbox": [ 302, 721, 525, 775 ], "type": "inline_equation", "content": "C" }, { "bbox": [ 302, 721, 525, 775 ], "type": "text", "content": " into " }, { "bbox": [ 302, 721, 525, 775 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 302, 721, 525, 775 ], "type": "text", "content": " intervals, denoted as " }, { "bbox": [ 302, 721, 525, 775 ], "type": "inline_equation", "content": "\\{\\mathcal{P}_1,\\mathcal{P}_2,\\dots ,\\mathcal{P}_m\\}" }, { "bbox": [ 302, 721, 525, 775 ], "type": "text", "content": " This approach allows positives and negatives to be sampled from different intervals, which amplifies" } ] } ], "index": 14 } ], "discarded_blocks": [ { "bbox": [ 286, 780, 309, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 780, 309, 791 ], "spans": [ { "bbox": [ 286, 780, 309, 791 ], "type": "text", "content": "6858" } ] } ], "index": 15 } ], "page_size": [ 595, 841 ], "page_idx": 2 }, { "para_blocks": [ { "bbox": [ 66, 71, 293, 275 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 66, 71, 293, 275 ], "spans": [ { "bbox": [ 66, 71, 293, 275 ], "type": "text", "content": "the scope of sampling space across diverse relevance tiers, effectively mitigating noise in BM25 pseudo labels. The partitioning can follow either a uniform or a fine-to-coarse strategy. Uniform intervals divide the range of BM25 scores into equally sized segments, ensuring a consistent distribution of samples across all intervals. In contrast, fine-to-coarse partitioning strategy intervals prioritize finer segmentation of higher-relevance scores, leading to more granular sampling for positively ranked examples. 
For instance, given " }, { "bbox": [ 66, 71, 293, 275 ], "type": "inline_equation", "content": "m = 4" }, { "bbox": [ 66, 71, 293, 275 ], "type": "text", "content": ", the top-20 ranking list can be divided into intervals [0, 2), [2, 6), [6, 12), [12, 20) using the fine-to-coarse strategy, whereas the uniform strategy divides it into [0, 5), [5, 10), [10, 15), [15, 20)." } ] } ], "index": 0 }, { "bbox": [ 67, 280, 291, 338 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 280, 291, 338 ], "spans": [ { "bbox": [ 67, 280, 291, 338 ], "type": "text", "content": "Ranking-Based Sampling For each interval " }, { "bbox": [ 67, 280, 291, 338 ], "type": "inline_equation", "content": "\\mathcal{P}_j" }, { "bbox": [ 67, 280, 291, 338 ], "type": "text", "content": ", we randomly select one sample " }, { "bbox": [ 67, 280, 291, 338 ], "type": "inline_equation", "content": "p_j" }, { "bbox": [ 67, 280, 291, 338 ], "type": "text", "content": " along with its retrieval score " }, { "bbox": [ 67, 280, 291, 338 ], "type": "inline_equation", "content": "r_j" }, { "bbox": [ 67, 280, 291, 338 ], "type": "text", "content": ", forming a ranking list " }, { "bbox": [ 67, 280, 291, 338 ], "type": "inline_equation", "content": "[q, p_1, p_2, \\ldots, p_m, r_1, r_2, \\ldots, r_m]" }, { "bbox": [ 67, 280, 291, 338 ], "type": "text", "content": "." 
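The partitioning and sampling steps above can be sketched as follows. The linear-width rule for the fine-to-coarse intervals is inferred from the k = 20, m = 4 example ([0, 2), [2, 6), [6, 12), [12, 20)); the helper names and the seeding are illustrative assumptions.

```python
import random

def partition(k, m, strategy="fine_to_coarse"):
    """Split rank positions [0, k) into m intervals. fine_to_coarse uses
    linearly growing widths (2, 4, 6, 8 for k=20, m=4), matching the
    example in the text; uniform uses equal widths."""
    if strategy == "uniform":
        bounds = [round(k * j / m) for j in range(m + 1)]
    else:
        total = m * (m + 1) // 2                 # 1 + 2 + ... + m
        bounds = [0]
        for j in range(1, m + 1):
            bounds.append(bounds[-1] + round(k * j / total))
        bounds[-1] = k                           # absorb rounding drift
    return [(bounds[j], bounds[j + 1]) for j in range(m)]

def sample_ranking_list(query, ranked_chunks, scores, m, seed=0):
    """Draw one (chunk, BM25 score) pair per interval, forming the
    ranking list [q, p_1..p_m, r_1..r_m] used for listwise fine-tuning."""
    rng = random.Random(seed)
    picks = [rng.randrange(lo, hi)
             for lo, hi in partition(len(ranked_chunks), m)]
    return query, [ranked_chunks[i] for i in picks], [scores[i] for i in picks]
```

Sampling one chunk per interval keeps the list ordered by relevance tier, so the BM25 scores attached to the samples remain a monotone ranking signal.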
} ] } ], "index": 1 }, { "bbox": [ 67, 343, 195, 356 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 343, 195, 356 ], "spans": [ { "bbox": [ 67, 343, 195, 356 ], "type": "text", "content": "3.3 Listwise Fine-Tuning" } ] } ], "index": 2 }, { "bbox": [ 66, 360, 291, 481 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 66, 360, 291, 481 ], "spans": [ { "bbox": [ 66, 360, 291, 481 ], "type": "text", "content": "Since BM25 retrieval results produce a ranked list, we hypothesize that this ranking contains valuable information that can be better utilized through a listwise training objective, rather than the commonly used in-batch negative contrastive learning objective, where ranking information is typically ignored. To this end, we employ a listwise training objective to fully leverage the ranking information obtained from BM25 retrieval." } ] } ], "index": 3 }, { "bbox": [ 67, 482, 291, 576 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 482, 291, 576 ], "spans": [ { "bbox": [ 67, 482, 291, 576 ], "type": "text", "content": "Given " }, { "bbox": [ 67, 482, 291, 576 ], "type": "inline_equation", "content": "[q, p_1, p_2, \\ldots, p_m, r_1, r_2, \\ldots, r_m]" }, { "bbox": [ 67, 482, 291, 576 ], "type": "text", "content": " and a base embedding model " }, { "bbox": [ 67, 482, 291, 576 ], "type": "inline_equation", "content": "e(\\cdot)" }, { "bbox": [ 67, 482, 291, 576 ], "type": "text", "content": ", we first obtain the embeddings of " }, { "bbox": [ 67, 482, 291, 576 ], "type": "inline_equation", "content": "q" }, { "bbox": [ 67, 482, 291, 576 ], "type": "text", "content": " and " }, { "bbox": [ 67, 482, 291, 576 ], "type": "inline_equation", "content": "p_j" }, { "bbox": [ 67, 482, 291, 576 ], "type": "text", "content": " for " }, { "bbox": [ 67, 482, 291, 576 ], "type": "inline_equation", "content": "j \\in [1, \\ldots, m]" }, { "bbox": [ 67, 482, 291, 576 ], "type": "text", "content": ", denoted as " }, { "bbox": [ 67, 482, 291, 576 ], 
"type": "inline_equation", "content": "e(q)" }, { "bbox": [ 67, 482, 291, 576 ], "type": "text", "content": " and " }, { "bbox": [ 67, 482, 291, 576 ], "type": "inline_equation", "content": "e(p_j)" }, { "bbox": [ 67, 482, 291, 576 ], "type": "text", "content": ", respectively. Then, we calculate the cosine similarity " }, { "bbox": [ 67, 482, 291, 576 ], "type": "inline_equation", "content": "s_j = \\mathrm{sim}(e(q), e(p_j))" }, { "bbox": [ 67, 482, 291, 576 ], "type": "text", "content": ". Following the work of ListNet (Cao et al., 2007), the listwise loss is calculated as follows:" } ] } ], "index": 4 }, { "bbox": [ 108, 580, 250, 617 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 108, 580, 250, 617 ], "spans": [ { "bbox": [ 108, 580, 250, 617 ], "type": "interline_equation", "content": "\\mathcal {L} (s, r) = - \\sum_ {q \\in Q} \\sum_ {j = 1} ^ {m} p _ {j} ^ {r} \\log (p _ {j} ^ {s})", "image_path": "61e90120de3f4becbd99f83b4936ecc51c37d2a32441032e9fa9980603a93f8d.jpg" } ] } ], "index": 5 }, { "bbox": [ 67, 622, 290, 688 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 622, 290, 688 ], "spans": [ { "bbox": [ 67, 622, 290, 688 ], "type": "text", "content": "where " }, { "bbox": [ 67, 622, 290, 688 ], "type": "inline_equation", "content": "r = \\{r_1, r_2, \\dots, r_m\\}" }, { "bbox": [ 67, 622, 290, 688 ], "type": "text", "content": ", " }, { "bbox": [ 67, 622, 290, 688 ], "type": "inline_equation", "content": "s = \\{s_1, s_2, \\dots, s_m\\}" }, { "bbox": [ 67, 622, 290, 688 ], "type": "text", "content": ", " }, { "bbox": [ 67, 622, 290, 688 ], "type": "inline_equation", "content": "p^r" }, { "bbox": [ 67, 622, 290, 688 ], "type": "text", "content": " and " }, { "bbox": [ 67, 622, 290, 688 ], "type": "inline_equation", "content": "p^s" }, { "bbox": [ 67, 622, 290, 688 ], "type": "text", "content": " are the distributions normalized by softmax over the " }, { "bbox": [ 67, 622, 290, 688 ], "type": "inline_equation", "content": "r" 
}, { "bbox": [ 67, 622, 290, 688 ], "type": "text", "content": " and " }, { "bbox": [ 67, 622, 290, 688 ], "type": "inline_equation", "content": "s" }, { "bbox": [ 67, 622, 290, 688 ], "type": "text", "content": ", respectively. We introduce a temperature scaling factor " }, { "bbox": [ 67, 622, 290, 688 ], "type": "inline_equation", "content": "\\alpha" }, { "bbox": [ 67, 622, 290, 688 ], "type": "text", "content": " on the target score list " }, { "bbox": [ 67, 622, 290, 688 ], "type": "inline_equation", "content": "r" }, { "bbox": [ 67, 622, 290, 688 ], "type": "text", "content": ", with:" } ] } ], "index": 6 }, { "bbox": [ 130, 687, 226, 719 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 130, 687, 226, 719 ], "spans": [ { "bbox": [ 130, 687, 226, 719 ], "type": "interline_equation", "content": "p _ {j} ^ {r} = \\frac {\\exp \\left(\\frac {r _ {j}}{\\alpha}\\right)}{\\sum_ {i = 1} ^ {m} \\exp \\left(\\frac {r _ {i}}{\\alpha}\\right)}", "image_path": "d248fc9f5785f4662f9e17fa3d974541cdeb7ee57e25c0f79fe46776134b7084.jpg" } ] } ], "index": 7 }, { "bbox": [ 67, 721, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 721, 291, 775 ], "spans": [ { "bbox": [ 67, 721, 291, 775 ], "type": "text", "content": "Here, " }, { "bbox": [ 67, 721, 291, 775 ], "type": "inline_equation", "content": "\\alpha" }, { "bbox": [ 67, 721, 291, 775 ], "type": "text", "content": " controls the sharpness of the target distribution, with smaller values leading to a more concentrated distribution, and larger values resulting in a smoother distribution." } ] } ], "index": 8 }, { "type": "table", "bbox": [ 309, 68, 520, 166 ], "blocks": [ { "bbox": [ 309, 68, 520, 166 ], "lines": [ { "bbox": [ 309, 68, 520, 166 ], "spans": [ { "bbox": [ 309, 68, 520, 166 ], "type": "table", "html": "
<table><tr><th>Dataset</th><th>Multihop</th><th>Finance</th><th>LegalBench</th></tr>
<tr><td>evaluation queries</td><td>2,255</td><td>498</td><td>1,676</td></tr>
<tr><td>corpus tokens</td><td>1,453k</td><td>840k</td><td>7,109k</td></tr>
<tr><td>synthesized queries</td><td>5,972</td><td>1,009</td><td>685</td></tr>
<tr><td>chunk size</td><td>256</td><td>1,024</td><td>1,024</td></tr>
<tr><td>k</td><td>1,000</td><td>1,000</td><td>4,000</td></tr>
<tr><td>m</td><td>9</td><td>6</td><td>6</td></tr></table>
", "image_path": "1107fa5679c4d8596fedbc64d0e75573d9487a9d50bd7a23aa8892b6bd9eed45.jpg" } ] } ], "index": 9, "angle": 0, "type": "table_body" } ], "index": 9 }, { "bbox": [ 302, 173, 525, 196 ], "lines": [ { "bbox": [ 302, 173, 525, 196 ], "spans": [ { "bbox": [ 302, 173, 525, 196 ], "type": "text", "content": "Table 1: Statistics and implementation details of the datasets." } ] } ], "index": 10, "angle": 0, "type": "text" }, { "bbox": [ 302, 218, 485, 231 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 218, 485, 231 ], "spans": [ { "bbox": [ 302, 218, 485, 231 ], "type": "text", "content": "4 How does BMEmbed Perform?" } ] } ], "index": 11 }, { "bbox": [ 302, 239, 425, 253 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 239, 425, 253 ], "spans": [ { "bbox": [ 302, 239, 425, 253 ], "type": "text", "content": "4.1 Experimental Setup" } ] } ], "index": 12 }, { "bbox": [ 301, 257, 526, 351 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 301, 257, 526, 351 ], "spans": [ { "bbox": [ 301, 257, 526, 351 ], "type": "text", "content": "Base Embedding Models We use the following two general-purpose embedding models: gteQwen2-1.5B-instruct², a small yet strong model, and e5-mistral-7B-instruct³, a larger model based on Mistral-7B. Both two models perform competitively on the MTEB leaderboard (Muennighoff et al., 2023)." 
} ] } ], "index": 13 }, { "bbox": [ 301, 359, 526, 522 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 301, 359, 526, 522 ], "spans": [ { "bbox": [ 301, 359, 526, 522 ], "type": "text", "content": "Baselines We compare models fine-tuned by BMEdb with the following methods: 1) BM25, with parameters " }, { "bbox": [ 301, 359, 526, 522 ], "type": "inline_equation", "content": "k_{1} = 1.2" }, { "bbox": [ 301, 359, 526, 522 ], "type": "text", "content": " and " }, { "bbox": [ 301, 359, 526, 522 ], "type": "inline_equation", "content": "b = 0.75" }, { "bbox": [ 301, 359, 526, 522 ], "type": "text", "content": "; 2) Base, the base embedding model. 3) CL, the embedding model fine-tuned using contrastive objective InfoNCE loss (van den Oord et al., 2018), where LLM-generated evidence is used as positives (as detailed in Section 3.1), along with in-batch negatives. 4) RRF, Reciprocal Rank Fusion (Cormack et al., 2009), which is a hybrid search method combining rankings from multiple sources into a unified ranking:" } ] } ], "index": 14 }, { "bbox": [ 355, 529, 473, 561 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 355, 529, 473, 561 ], "spans": [ { "bbox": [ 355, 529, 473, 561 ], "type": "interline_equation", "content": "R R F (d) = \\sum_ {a \\in A} \\frac {1}{u + a (d)}", "image_path": "6bd9528bbfc182f78e0637bf85279482eb7f40eba9a96ee3d4dbec0f49906e7a.jpg" } ] } ], "index": 15 }, { "bbox": [ 302, 569, 525, 650 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 569, 525, 650 ], "spans": [ { "bbox": [ 302, 569, 525, 650 ], "type": "text", "content": "where " }, { "bbox": [ 302, 569, 525, 650 ], "type": "inline_equation", "content": "d" }, { "bbox": [ 302, 569, 525, 650 ], "type": "text", "content": " is a document, " }, { "bbox": [ 302, 569, 525, 650 ], "type": "inline_equation", "content": "A" }, { "bbox": [ 302, 569, 525, 650 ], "type": "text", "content": " is the set of rankers (retrievers), " }, { "bbox": [ 302, 569, 525, 650 ], 
"type": "inline_equation", "content": "a(d)" }, { "bbox": [ 302, 569, 525, 650 ], "type": "text", "content": " is the rank of document " }, { "bbox": [ 302, 569, 525, 650 ], "type": "inline_equation", "content": "d" }, { "bbox": [ 302, 569, 525, 650 ], "type": "text", "content": " in ranker " }, { "bbox": [ 302, 569, 525, 650 ], "type": "inline_equation", "content": "a" }, { "bbox": [ 302, 569, 525, 650 ], "type": "text", "content": ", and " }, { "bbox": [ 302, 569, 525, 650 ], "type": "inline_equation", "content": "u" }, { "bbox": [ 302, 569, 525, 650 ], "type": "text", "content": " is a constant set to 40. Here we combine BM25 rankings with the base embedding model. 5) RRF+BMEmbed, the combination of the BM25 and the BMEnder-finetuned model." } ] } ], "index": 16 }, { "bbox": [ 302, 657, 525, 739 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 657, 525, 739 ], "spans": [ { "bbox": [ 302, 657, 525, 739 ], "type": "text", "content": "\"Private\" Datasets In our experiments, we choose three publicly available retrieval datasets as evaluation benchmarks. However, these datasets are released after the base embedding models, meaning the models are unlikely to have been trained on them. 
Therefore, while the datasets are" } ] } ], "index": 17 } ], "discarded_blocks": [ { "bbox": [ 304, 745, 525, 761 ], "type": "page_footnote", "angle": 0, "lines": [ { "bbox": [ 304, 745, 525, 761 ], "spans": [ { "bbox": [ 304, 745, 525, 761 ], "type": "text", "content": "2 https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct" } ] } ], "index": 18 }, { "bbox": [ 317, 762, 512, 773 ], "type": "page_footnote", "angle": 0, "lines": [ { "bbox": [ 317, 762, 512, 773 ], "spans": [ { "bbox": [ 317, 762, 512, 773 ], "type": "text", "content": "3 https://huggingface.co/intfloat/e5-mistral-7b-instruct" } ] } ], "index": 19 }, { "bbox": [ 286, 780, 309, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 780, 309, 791 ], "spans": [ { "bbox": [ 286, 780, 309, 791 ], "type": "text", "content": "6859" } ] } ], "index": 21 } ], "page_size": [ 595, 841 ], "page_idx": 3 }, { "para_blocks": [ { "type": "table", "bbox": [ 70, 68, 523, 228 ], "blocks": [ { "bbox": [ 70, 68, 523, 228 ], "lines": [ { "bbox": [ 70, 68, 523, 228 ], "spans": [ { "bbox": [ 70, 68, 523, 228 ], "type": "table", "html": "
<table><tr><th>Method</th><th colspan=4>Multihop-RAG</th><th colspan=4>Finance-RAG</th><th colspan=4>LegalBench-RAG</th></tr>
<tr><th></th><th>Hit@1</th><th>Hit@4</th><th>Hit@10</th><th>MAP@10</th><th>Hit@1</th><th>Hit@4</th><th>Hit@10</th><th>MAP@10</th><th>Hit@1</th><th>Hit@4</th><th>Hit@10</th><th>MAP@10</th></tr>
<tr><td>BM25</td><td>41.06</td><td>65.01</td><td>79.02</td><td>25.93</td><td>28.51</td><td>46.18</td><td>57.43</td><td>37.46</td><td>0.12</td><td>7.58</td><td>14.62</td><td>1.62</td></tr>
<tr><th colspan=13>Qwen2-1.5B</th></tr>
<tr><td>Base</td><td>33.97</td><td>59.69</td><td>76.50</td><td>22.22</td><td>23.69</td><td>41.37</td><td>53.82</td><td>32.84</td><td>8.00</td><td>16.65</td><td>23.09</td><td>6.34</td></tr>
<tr><td>CL</td><td>31.53</td><td>55.96</td><td>74.72</td><td>21.48</td><td>25.50</td><td>43.57</td><td>58.43</td><td>35.20</td><td>6.44</td><td>17.90</td><td>25.48</td><td>5.45</td></tr>
<tr><td>BMEmbed</td><td>40.58</td><td>68.34</td><td>83.06</td><td>26.54</td><td>26.31</td><td>45.38</td><td>57.03</td><td>36.21</td><td>8.95</td><td>20.64</td><td>28.52</td><td>7.47</td></tr>
<tr><td>RRF</td><td>38.76</td><td>66.30</td><td>82.04</td><td>25.80</td><td>31.73</td><td>49.80</td><td>63.45</td><td>40.97</td><td>8.47</td><td>18.32</td><td>24.76</td><td>6.45</td></tr>
<tr><td>RRF+BMEmbed</td><td>43.28</td><td>71.09</td><td>84.35</td><td>28.30</td><td>31.73</td><td>51.61</td><td>64.46</td><td>41.62</td><td>9.43</td><td>19.69</td><td>28.46</td><td>7.19</td></tr>
<tr><th colspan=13>e5-mistral-7B</th></tr>
<tr><td>Base</td><td>29.49</td><td>54.99</td><td>75.39</td><td>20.33</td><td>19.28</td><td>36.55</td><td>48.80</td><td>28.10</td><td>7.76</td><td>17.42</td><td>23.75</td><td>6.48</td></tr>
<tr><td>CL</td><td>21.11</td><td>48.34</td><td>69.40</td><td>16.67</td><td>24.30</td><td>46.79</td><td>57.43</td><td>35.08</td><td>7.88</td><td>16.65</td><td>21.06</td><td>5.37</td></tr>
<tr><td>BMEmbed</td><td>45.28</td><td>71.49</td><td>85.63</td><td>27.60</td><td>28.11</td><td>48.39</td><td>62.25</td><td>38.40</td><td>9.96</td><td>19.03</td><td>27.27</td><td>7.08</td></tr>
<tr><td>RRF</td><td>42.26</td><td>67.58</td><td>82.13</td><td>27.04</td><td>30.72</td><td>47.39</td><td>61.85</td><td>39.55</td><td>9.79</td><td>19.09</td><td>24.34</td><td>7.23</td></tr>
<tr><td>RRF+BMEmbed</td><td>45.72</td><td>71.44</td><td>85.72</td><td>28.36</td><td>32.33</td><td>52.21</td><td>64.06</td><td>41.92</td><td>9.96</td><td>19.03</td><td>27.27</td><td>7.08</td></tr></table>
", "image_path": "5eaf7bf6c57df665e53c5ce5a39662958606436a2f1c4fa159d92466446b2898.jpg" } ] } ], "index": 0, "angle": 0, "type": "table_body" } ], "index": 0 }, { "bbox": [ 67, 236, 525, 261 ], "lines": [ { "bbox": [ 67, 236, 525, 261 ], "spans": [ { "bbox": [ 67, 236, 525, 261 ], "type": "text", "content": "Table 2: Retrieval performance of different methods across three datasets. Best results are highlighted for each embedding model on each dataset." } ] } ], "index": 1, "angle": 0, "type": "text" }, { "bbox": [ 67, 282, 290, 322 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 282, 290, 322 ], "spans": [ { "bbox": [ 67, 282, 290, 322 ], "type": "text", "content": "publicly available, they effectively simulate \"private\" datasets in our experiments, also ensuring fair comparison and reproducibility." } ] } ], "index": 2 }, { "bbox": [ 67, 322, 291, 498 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 322, 291, 498 ], "spans": [ { "bbox": [ 67, 322, 291, 498 ], "type": "text", "content": "Specifically, the three datasets are: Multihop-RAG (Tang and Yang, 2024b), a multi-hop question answering (QA) dataset from the financial news domain; Finance-RAG4, a long-context QA dataset based on financial reports, released as part of the ACM-ICAIF'24 FinanceRAG competition; and LegalBench-RAG (Pipitone and Alami, 2024), a challenging long-context legal domain QA dataset. Each dataset contains questions, their corresponding relevant evidence, and the original corpus. We use the evidence as the label to evaluate the retrieval performance. Detailed statistics are provided in Table 1." } ] } ], "index": 3 }, { "bbox": [ 67, 506, 291, 749 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 506, 291, 749 ], "spans": [ { "bbox": [ 67, 506, 291, 749 ], "type": "text", "content": "Implementation and Training Details For domain query generation, we use GPT-4o for accurate event extraction and GPT-4o-mini for query synthesis to minimize costs. 
We generate 5,972, 1,009, and 685 queries for Multihop-RAG, Finance-RAG, and LegalBench-RAG, respectively, based on corpus size. A real case, including the input corpus, intermediate events, and the final generated query, is showcased in Appendix B. During relevant sampling, we set a chunk size of 256 for Multihop-RAG and 1,024 for the other two datasets with long context. The fine-to-coarse partitioning strategy is used by default. We adopt " }, { "bbox": [ 67, 506, 291, 749 ], "type": "inline_equation", "content": "m = 9" }, { "bbox": [ 67, 506, 291, 749 ], "type": "text", "content": " for Multihop-RAG and " }, { "bbox": [ 67, 506, 291, 749 ], "type": "inline_equation", "content": "m = 6" }, { "bbox": [ 67, 506, 291, 749 ], "type": "text", "content": " for the others, with " }, { "bbox": [ 67, 506, 291, 749 ], "type": "inline_equation", "content": "k = 1,000" }, { "bbox": [ 67, 506, 291, 749 ], "type": "text", "content": " for Multihop-RAG and Finance-RAG, and " }, { "bbox": [ 67, 506, 291, 749 ], "type": "inline_equation", "content": "k = 4,000" }, { "bbox": [ 67, 506, 291, 749 ], "type": "text", "content": " for LegalBench-RAG. The impact of different " }, { "bbox": [ 67, 506, 291, 749 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 67, 506, 291, 749 ], "type": "text", "content": " and partitioning strategies is further discussed in Section 5.2. The results under different " }, { "bbox": [ 67, 506, 291, 749 ], "type": "inline_equation", "content": "k" }, { "bbox": [ 67, 506, 291, 749 ], "type": "text", "content": " are shown"
For finetuning, we use a fixed batch size of 16 for CL, while the batch size is equivalent to " }, { "bbox": [ 302, 282, 526, 430 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 302, 282, 526, 430 ], "type": "text", "content": " for BMEmbed. The temperature " }, { "bbox": [ 302, 282, 526, 430 ], "type": "inline_equation", "content": "\\alpha" }, { "bbox": [ 302, 282, 526, 430 ], "type": "text", "content": " is set to a moderate value between 1.0 and 5.0, with further per-dataset and per-model adjustments, discussed in detail in Section 5.3. We finetune the model using LoRA (Hu et al., 2022) with a rank of 16 for 1,000 steps. Training Qwen on " }, { "bbox": [ 302, 282, 526, 430 ], "type": "inline_equation", "content": "4 \\times 3090" }, { "bbox": [ 302, 282, 526, 430 ], "type": "text", "content": " GPUs takes about 1.5 hours, while training e5-mistral on " }, { "bbox": [ 302, 282, 526, 430 ], "type": "inline_equation", "content": "8 \\times \\mathrm{H}800" }, { "bbox": [ 302, 282, 526, 430 ], "type": "text", "content": " GPUs takes approximately one hour." } ] } ], "index": 5 }, { "bbox": [ 302, 443, 437, 454 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 443, 437, 454 ], "spans": [ { "bbox": [ 302, 443, 437, 454 ], "type": "text", "content": "4.2 Results and Discussion" } ] } ], "index": 6 }, { "bbox": [ 302, 462, 525, 502 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 462, 525, 502 ], "spans": [ { "bbox": [ 302, 462, 525, 502 ], "type": "text", "content": "Table 2 presents the experimental results of BMEmbed and all baselines across two embedding models and three datasets. We observe the following:" } ] } ], "index": 7 }, { "bbox": [ 302, 503, 526, 718 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 503, 526, 718 ], "spans": [ { "bbox": [ 302, 503, 526, 718 ], "type": "text", "content": "1) The vanilla embedding models perform suboptimally in specific domains.
In most cases, base models underperform BM25 on Multihop-RAG and Finance-RAG, even with large model sizes. This finding highlights the necessity of further adaptation when applying general-purpose embedding models to specific domains. Furthermore, BMEmbed consistently outperforms BM25 across models and datasets, despite being trained with supervisory signals derived from BM25. This demonstrates that BMEmbed is not merely mimicking BM25. Instead, we treat BM25 as a weak lexical teacher and design both our sampling strategy and training objective to guide the model toward learning relevance information beyond BM25's direct outputs." } ] } ], "index": 8 }, { "bbox": [ 302, 721, 525, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 721, 525, 775 ], "spans": [ { "bbox": [ 302, 721, 525, 775 ], "type": "text", "content": "2) Contrastive learning does not consistently lead to performance improvements for embedding model adaptation. Surprisingly, we find that applying CL to base models does not always improve
291, 240 ], "lines": [ { "bbox": [ 67, 216, 291, 240 ], "spans": [ { "bbox": [ 67, 216, 291, 240 ], "type": "text", "content": "Figure 3: Retrieval performance of MAP@10 for different " }, { "bbox": [ 67, 216, 291, 240 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 67, 216, 291, 240 ], "type": "text", "content": " and sampling strategies." } ] } ], "index": 1, "angle": 0, "type": "image_caption" } ], "index": 0 }, { "type": "image", "bbox": [ 92, 254, 265, 389 ], "blocks": [ { "bbox": [ 92, 254, 265, 389 ], "lines": [ { "bbox": [ 92, 254, 265, 389 ], "spans": [ { "bbox": [ 92, 254, 265, 389 ], "type": "image", "image_path": "fe99850626317a3cfba1028b22239cc56875e961dce4cc57eb9a2a76d051ef94.jpg" } ] } ], "index": 2, "angle": 0, "type": "image_body" }, { "bbox": [ 67, 400, 289, 424 ], "lines": [ { "bbox": [ 67, 400, 289, 424 ], "spans": [ { "bbox": [ 67, 400, 289, 424 ], "type": "text", "content": "Figure 4: Alignment and uniformity for different " }, { "bbox": [ 67, 400, 289, 424 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 67, 400, 289, 424 ], "type": "text", "content": " and sampling strategies." } ] } ], "index": 3, "angle": 0, "type": "image_caption" } ], "index": 2 }, { "bbox": [ 67, 448, 290, 543 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 448, 290, 543 ], "spans": [ { "bbox": [ 67, 448, 290, 543 ], "type": "text", "content": "performance. We hypothesize that noise in the positive evidence generated by the LLM might interfere with model optimization. This indicates that contrastive learning is sensitive to the quality of positive and negative samples, and such an approach does not always result in promising improvements for embedding adaptation." 
} ] } ], "index": 4 }, { "bbox": [ 67, 544, 291, 775 ], "type": "list", "angle": 0, "index": 7, "blocks": [ { "bbox": [ 67, 544, 290, 692 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 544, 290, 692 ], "spans": [ { "bbox": [ 67, 544, 290, 692 ], "type": "text", "content": "3) Our BMEdb consistently delivers improvements, benefiting from the supervision signals provided by BM25. Our framework boosts the base models across all embedding models and datasets, especially on the metrics Hit@4. Compared to RRF which combines BM25 ranking information with dense retrieval from embedding models, BMEbed achieves a remarkable improvement, which illustrates that our framework deeply deciphers the ranking confidence signals from BM25, achieving a better embedding model adaptation." } ] } ], "index": 5 }, { "bbox": [ 67, 693, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 693, 291, 775 ], "spans": [ { "bbox": [ 67, 693, 291, 775 ], "type": "text", "content": "4) Furthermore, BMEdb can be combined with other hybrid retrieval methods to achieve further enhancement. This is demonstrated in experiments comparing RRF+BMEdb with RRF alone. 
In most cases, RRF+BMEmbed shows clear performance gains, except in the case of LegalBench" } ] } ], "index": 6 } ], "sub_type": "text" }, { "type": "image", "bbox": [ 332, 71, 498, 204 ], "blocks": [ { "bbox": [ 332, 71, 498, 204 ], "lines": [ { "bbox": [ 332, 71, 498, 204 ], "spans": [ { "bbox": [ 332, 71, 498, 204 ], "type": "image", "image_path": "3b4e29d885195f454bd5d888d374c0bdc8f75c5c0886023ae8399b88382b3116.jpg" } ] } ], "index": 8, "angle": 0, "type": "image_body" }, { "bbox": [ 302, 215, 526, 238 ], "lines": [ { "bbox": [ 302, 215, 526, 238 ], "spans": [ { "bbox": [ 302, 215, 526, 238 ], "type": "text", "content": "Figure 5: Retrieval performance of MAP@10 for different " }, { "bbox": [ 302, 215, 526, 238 ], "type": "inline_equation", "content": "\\alpha" }, { "bbox": [ 302, 215, 526, 238 ], "type": "text", "content": "." } ] } ], "index": 9, "angle": 0, "type": "image_caption" } ], "index": 8 }, { "type": "image", "bbox": [ 328, 253, 499, 391 ], "blocks": [ { "bbox": [ 328, 253, 499, 391 ], "lines": [ { "bbox": [ 328, 253, 499, 391 ], "spans": [ { "bbox": [ 328, 253, 499, 391 ], "type": "image", "image_path": "a71fc43b30e02fa9723aea0e7b0836f8db9b4cec0aa8ff7aa8f57bb7e87a7a71.jpg" } ] } ], "index": 10, "angle": 0, "type": "image_body" }, { "bbox": [ 309, 401, 518, 413 ], "lines": [ { "bbox": [ 309, 401, 518, 413 ], "spans": [ { "bbox": [ 309, 401, 518, 413 ], "type": "text", "content": "Figure 6: Alignment and uniformity for different " }, { "bbox": [ 309, 401, 518, 413 ], "type": "inline_equation", "content": "\\alpha" }, { "bbox": [ 309, 401, 518, 413 ], "type": "text", "content": "." } ] } ], "index": 11, "angle": 0, "type": "image_caption" } ], "index": 10 }, { "bbox": [ 302, 435, 525, 476 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 435, 525, 476 ], "spans": [ { "bbox": [ 302, 435, 525, 476 ], "type": "text", "content": "RAG, where the BM25 baseline performs poorly and RRF+BMEmbed does not achieve further performance gains."
} ] } ], "index": 12 }, { "bbox": [ 302, 486, 506, 499 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 486, 506, 499 ], "spans": [ { "bbox": [ 302, 486, 506, 499 ], "type": "text", "content": "4.3 Generality under Alternative Settings" } ] } ], "index": 13 }, { "bbox": [ 301, 503, 525, 611 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 301, 503, 525, 611 ], "spans": [ { "bbox": [ 301, 503, 525, 611 ], "type": "text", "content": "To further explore the generality of the BMEmbed framework, we conduct additional experiments under three different settings: (1) applying BMEmbed to a smaller embedding model, (2) replacing the loss function in listwise fine-tuning, and (3) evaluating the adapted embedding model on other embedding task. Full experimental setups and results are provided in Appendix C." } ] } ], "index": 14 }, { "bbox": [ 302, 612, 526, 747 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 612, 526, 747 ], "spans": [ { "bbox": [ 302, 612, 526, 747 ], "type": "text", "content": "In Setting 1, we choose all-MiniLM-L6-v2" }, { "bbox": [ 302, 612, 526, 747 ], "type": "inline_equation", "content": "^5" }, { "bbox": [ 302, 612, 526, 747 ], "type": "text", "content": " from the Sentence Transformers family as a smaller embedding model. We observe that even small model can achieve performance comparable to larger general-purpose model after BMEdb adaptation, while requiring significantly fewer computational resources and training time. This highlights the practicality and efficiency of our framework in resource-constrained scenarios. 
In Setting 2, we replace the cross-entropy loss with a" } ] } ], "index": 15 } ], "discarded_blocks": [ { "bbox": [ 303, 755, 524, 773 ], "type": "page_footnote", "angle": 0, "lines": [ { "bbox": [ 303, 755, 524, 773 ], "spans": [ { "bbox": [ 303, 755, 524, 773 ], "type": "text", "content": "5 https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2" } ] } ], "index": 16 }, { "bbox": [ 286, 781, 308, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 781, 308, 791 ], "spans": [ { "bbox": [ 286, 781, 308, 791 ], "type": "text", "content": "6861" } ] } ], "index": 17 } ], "page_size": [ 595, 841 ], "page_idx": 5 }, { "para_blocks": [ { "bbox": [ 67, 71, 293, 262 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 71, 293, 262 ], "spans": [ { "bbox": [ 67, 71, 293, 262 ], "type": "text", "content": "maximum likelihood loss, ListMLE (Xia et al., 2008), in listwise fine-tuning. The adapted model still shows consistent improvements, demonstrating that BMEmbed is robust across different listwise training objectives. In Setting 3, we evaluate the adapted Qwen2-1.5B embedding model on FinSTS (Liu et al., 2024), a semantic textual similarity task. Despite being trained solely on listwise signals derived from BM25 rankings and without any direct supervision on the STS task, the adapted models achieve noticeable improvements. This suggests that our approach effectively captures domain-specific semantic nuances, further highlighting its broader utility." } ] } ], "index": 0 }, { "bbox": [ 67, 269, 289, 312 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 269, 289, 312 ], "spans": [ { "bbox": [ 67, 269, 289, 312 ], "type": "text", "content": "5 Why Does BMEmbed Enhance Embedding Adaptation?
An Investigation of Uniformity and Alignment" } ] } ], "index": 1 }, { "bbox": [ 67, 318, 292, 561 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 318, 292, 561 ], "spans": [ { "bbox": [ 67, 318, 292, 561 ], "type": "text", "content": "In this section, we further investigate why BMEmbed leads to improvements. We conduct ablation experiments to study how our samplers and temperature interact with retrieval performance. Moreover, we introduce the Alignment and Uniformity properties, which reflect the quality of the embedding, to gain a deeper theoretical understanding. The reported experiments are based on the Multihop-RAG dataset and the Qwen2-1.5B model by default. The complete ablation study setup and results are presented in Appendix D. As observed in the ablation study, our experiments empirically reveal a strong agreement between embedding properties and retrieval performance, suggesting that the enhancement from BMEmbed results from the optimized embedding properties. Here, we discuss our key observations and conclusions." } ] } ], "index": 2 }, { "bbox": [ 67, 570, 221, 584 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 570, 221, 584 ], "spans": [ { "bbox": [ 67, 570, 221, 584 ], "type": "text", "content": "5.1 Alignment and Uniformity" } ] } ], "index": 3 }, { "bbox": [ 67, 587, 292, 709 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 587, 292, 709 ], "spans": [ { "bbox": [ 67, 587, 292, 709 ], "type": "text", "content": "A good embedding should bring similar data points closer together while preserving as much useful information as possible (Bachman et al., 2019; Hjelm et al., 2019) to distinguish different data points, leading to lower alignment and higher uniformity. Here, we adopt alignment and uniformity to evaluate an embedding, following the work of Wang and Isola (2020), with further details and discussion in Appendix E."
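The two properties can be computed with a minimal NumPy sketch of the definitions in Wang and Isola (2020): alignment is the mean squared distance between normalized positive pairs, and uniformity is the log of the mean Gaussian potential over all pairs. (The paper reports uniformity on a higher-is-better scale, so its values may correspond to the negation of the quantity below.)

```python
import numpy as np

def alignment(x, y):
    """Mean squared distance between L2-normalized positive pairs; lower is better."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    return float((np.linalg.norm(x - y, axis=1) ** 2).mean())

def uniformity(x, t=2.0):
    """log of the mean Gaussian potential over all distinct pairs;
    more negative = embeddings spread more uniformly on the sphere."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    sq_dists = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    n = x.shape[0]
    off_diag = sq_dists[~np.eye(n, dtype=bool)]  # drop self-pairs
    return float(np.log(np.exp(-t * off_diag).mean()))
```

Embeddings spread around the unit circle score a much more negative (better) uniformity than embeddings collapsed into a single cluster, while identical positive pairs achieve zero (perfect) alignment.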
} ] } ], "index": 4 }, { "bbox": [ 67, 717, 273, 729 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 717, 273, 729 ], "spans": [ { "bbox": [ 67, 717, 273, 729 ], "type": "text", "content": "5.2 Ablation Study of Different Partitions" } ] } ], "index": 5 }, { "bbox": [ 67, 735, 292, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 735, 292, 775 ], "spans": [ { "bbox": [ 67, 735, 292, 775 ], "type": "text", "content": "To explore the effect of different partitions during relevant sampling via BM25 in BMEnder, we investigate the impact of various partition factors," } ] } ], "index": 6 }, { "bbox": [ 302, 71, 526, 138 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 71, 526, 138 ], "spans": [ { "bbox": [ 302, 71, 526, 138 ], "type": "text", "content": "including the number of partitions and the partitioning strategies. Specifically, we conduct experiments with " }, { "bbox": [ 302, 71, 526, 138 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 302, 71, 526, 138 ], "type": "text", "content": " ranging from 6 to 10, using both uniform and fine-to-coarse sampling strategies, with the temperature " }, { "bbox": [ 302, 71, 526, 138 ], "type": "inline_equation", "content": "\\alpha" }, { "bbox": [ 302, 71, 526, 138 ], "type": "text", "content": " set to 1 and " }, { "bbox": [ 302, 71, 526, 138 ], "type": "inline_equation", "content": "k" }, { "bbox": [ 302, 71, 526, 138 ], "type": "text", "content": " set to 1,000." } ] } ], "index": 7 }, { "bbox": [ 302, 141, 527, 423 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 141, 527, 423 ], "spans": [ { "bbox": [ 302, 141, 527, 423 ], "type": "text", "content": "Figure 3 shows the relationship between retrieval metrics MAP@10 and fine-tuning with different m and sampling strategies, while Figure 4 presents a comparison of uniformity and alignment of the fine-tuning models shown in previous figure. 
We observe that the fine-to-coarse strategy achieves better retrieval performance and superior alignment compared to the uniform strategy. In contrast, the uniform strategy is suboptimal in retrieval performance due to its overly uniform embedding distribution, which leads to a loss of alignment. In addition, as " }, { "bbox": [ 302, 141, 527, 423 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 302, 141, 527, 423 ], "type": "text", "content": " increases from 6 to 7 under the fine-to-coarse sampling strategy, we observe a measurable improvement in MAP@10 performance, suggesting that moderately expanding the sampling scope captures more relevant items. However, further increasing " }, { "bbox": [ 302, 141, 527, 423 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 302, 141, 527, 423 ], "type": "text", "content": " causes performance fluctuations and a gradual decline in overall effectiveness. These findings highlight the importance of carefully calibrating " }, { "bbox": [ 302, 141, 527, 423 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 302, 141, 527, 423 ], "type": "text", "content": " to optimize retrieval performance." } ] } ], "index": 8 }, { "bbox": [ 302, 440, 515, 468 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 440, 515, 468 ], "spans": [ { "bbox": [ 302, 440, 515, 468 ], "type": "text", "content": "5.3 Ablation Study of Listwise Fine-Tuning with Varying Temperatures" } ] } ], "index": 9 }, { "bbox": [ 302, 476, 526, 543 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 476, 526, 543 ], "spans": [ { "bbox": [ 302, 476, 526, 543 ], "type": "text", "content": "We examine the effect of varying temperatures " }, { "bbox": [ 302, 476, 526, 543 ], "type": "inline_equation", "content": "\\alpha" }, { "bbox": [ 302, 476, 526, 543 ], "type": "text", "content": ". 
For convenience, we work with its reciprocal " }, { "bbox": [ 302, 476, 526, 543 ], "type": "inline_equation", "content": "1 / \\alpha" }, { "bbox": [ 302, 476, 526, 543 ], "type": "text", "content": ", with values of 0.1, 0.2, 0.5, 0.7, and 1.0. We set " }, { "bbox": [ 302, 476, 526, 543 ], "type": "inline_equation", "content": "k = 500" }, { "bbox": [ 302, 476, 526, 543 ], "type": "text", "content": ", " }, { "bbox": [ 302, 476, 526, 543 ], "type": "inline_equation", "content": "m = 10" }, { "bbox": [ 302, 476, 526, 543 ], "type": "text", "content": ", and adopt the fine-to-coarse sampling strategy." } ] } ], "index": 10 }, { "bbox": [ 302, 544, 527, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 544, 527, 775 ], "spans": [ { "bbox": [ 302, 544, 527, 775 ], "type": "text", "content": "Figure 5 shows the trend between MAP@10 and fine-tuning with different " }, { "bbox": [ 302, 544, 527, 775 ], "type": "inline_equation", "content": "1 / \\alpha" }, { "bbox": [ 302, 544, 527, 775 ], "type": "text", "content": ", with the corresponding alignment and uniformity results shown in Figure 6. Our analysis shows that smaller temperatures achieve better retrieval performance by fostering good uniformity in the embedding distribution. In contrast, as the temperature increases, uniformity decreases, even falling below that of the base model. This is because a higher temperature smooths the label distribution, which diminishes the distinction between learning samples and causes the embeddings to become overly clustered. Such clustering may hurt the performance of downstream tasks that require clear distinction between embeddings, as observed in our experiments, where it led to a degradation in retrieval performance.
} ] } ], "index": 11 } ], "discarded_blocks": [ { "bbox": [ 286, 781, 310, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 781, 310, 791 ], "spans": [ { "bbox": [ 286, 781, 310, 791 ], "type": "text", "content": "6862" } ] } ], "index": 12 } ], "page_size": [ 595, 841 ], "page_idx": 6 }, { "para_blocks": [ { "type": "table", "bbox": [ 94, 68, 500, 208 ], "blocks": [ { "bbox": [ 94, 68, 500, 208 ], "lines": [ { "bbox": [ 94, 68, 500, 208 ], "spans": [ { "bbox": [ 94, 68, 500, 208 ], "type": "table", "html": "
<table><tr><th>Method</th><th colspan=2>Multihop-RAG</th><th colspan=2>Finance-RAG</th><th colspan=2>LegalBench-RAG</th></tr>
<tr><th></th><th>Alignment↓</th><th>Uniformity↑</th><th>Alignment↓</th><th>Uniformity↑</th><th>Alignment↓</th><th>Uniformity↑</th></tr>
<tr><th colspan=7>Qwen2-1.5B</th></tr>
<tr><td>Base</td><td>1.2422</td><td>2.7665</td><td>1.1562</td><td>1.6567</td><td>1.3203</td><td>1.1599</td></tr>
<tr><td>CL</td><td>1.3516</td><td>2.8022</td><td>1.2188</td><td>2.9437</td><td>2.0000</td><td>2.2382</td></tr>
<tr><td>BMEmbed</td><td>1.2031</td><td>3.3266</td><td>1.1484</td><td>2.6631</td><td>1.6691</td><td>2.1426</td></tr>
<tr><th colspan=7>e5-mistral-7B</th></tr>
<tr><td>Base</td><td>1.1875</td><td>1.7430</td><td>1.1797</td><td>1.0353</td><td>1.2891</td><td>0.7317</td></tr>
<tr><td>CL</td><td>1.5156</td><td>2.7649</td><td>1.3281</td><td>3.0445</td><td>2.7969</td><td>1.7913</td></tr>
<tr><td>BMEmbed</td><td>1.1797</td><td>3.7768</td><td>1.0859</td><td>3.2144</td><td>1.6797</td><td>1.6182</td></tr></table>
", "image_path": "19927aef93f7abbedc7858892a21e1143132a8e4dfb51ff56bfc6f8ae65c8da6.jpg" } ] } ], "index": 0, "angle": 0, "type": "table_body" } ], "index": 0 }, { "type": "table", "bbox": [ 94, 251, 503, 366 ], "blocks": [ { "bbox": [ 67, 216, 525, 241 ], "lines": [ { "bbox": [ 67, 216, 525, 241 ], "spans": [ { "bbox": [ 67, 216, 525, 241 ], "type": "text", "content": "Table 3: Alignment and Uniformity of Embedding Models. Lower alignment (↓) and higher uniformity (↑) are preferred. Best results are highlighted for each embedding model on each dataset." } ] } ], "index": 1, "angle": 0, "type": "table_caption" }, { "bbox": [ 94, 251, 503, 366 ], "lines": [ { "bbox": [ 94, 251, 503, 366 ], "spans": [ { "bbox": [ 94, 251, 503, 366 ], "type": "table", "html": "
<table><tr><th>Original Query</th><th>Masked Query</th><th>Substituted Query</th></tr>
<tr><td>What variables are considered on top of the value at 1 January when calculating the value at 31 December for government grants that are included within trade and other payables?</td><td>What variables are considered on top of the value at [MASK] when calculating the value at 31 December for [MASK] [MASK] that are included within [MASK] and other [MASK]?</td><td>What variables are considered on top of the value at New Year's Day when calculating the value at 31 December for public subsidies that are included within commerce and other liabilities?</td></tr></table>
", "image_path": "b73d90c0cdafc83572e85545a481939add565554d78f08174d4e9733ca6f72a5.jpg" } ] } ], "index": 2, "angle": 0, "type": "table_body" } ], "index": 2 }, { "bbox": [ 161, 374, 430, 386 ], "lines": [ { "bbox": [ 161, 374, 430, 386 ], "spans": [ { "bbox": [ 161, 374, 430, 386 ], "type": "text", "content": "Table 4: A comparative example of three query perturbation types." } ] } ], "index": 3, "angle": 0, "type": "text" }, { "bbox": [ 67, 407, 263, 435 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 407, 263, 435 ], "spans": [ { "bbox": [ 67, 407, 263, 435 ], "type": "text", "content": "5.4 BMEnder Balances Alignment and Uniformity Optimization" } ] } ], "index": 4 }, { "bbox": [ 67, 446, 291, 568 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 446, 291, 568 ], "spans": [ { "bbox": [ 67, 446, 291, 568 ], "type": "text", "content": "Our ablation experiment and analysis have demonstrated that using the fine-to-coarse strategy with a smaller temperature is an effective way to leverage BM25, supported by both theoretical reasoning and practical results. Since main experiment we conducted in Section 4.2 is based on this strategy, here we report the uniformity and alignment of corresponding fine-tuned embedding models in Table 3 for further analysis." } ] } ], "index": 5 }, { "bbox": [ 67, 571, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 571, 291, 775 ], "spans": [ { "bbox": [ 67, 571, 291, 775 ], "type": "text", "content": "Embedding models fine-tuned with BMEmbed achieve better retrieval results due to increased uniformity compared to the base model, while maintaining relatively low alignment. Comparing with CL with in-batch negatives, we observe that although uniformity has increased significantly, it does not effectively maintain or improve the alignment of the base model. This imbalance leads to instability in retrieval performance, and in some cases, even performance degradation. 
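The masked and substituted variants illustrated in Table 4 can be generated mechanically once keywords and synonyms are available; a minimal sketch follows, where the keyword list and synonym map are hypothetical stand-ins for the LLM-extracted ones described in Section 6.

```python
import re

def mask_query(query, keywords, mask_token="[MASK]"):
    # Replace each domain-specific keyword with a mask token.
    for kw in keywords:
        query = re.sub(re.escape(kw), mask_token, query, flags=re.IGNORECASE)
    return query

def substitute_query(query, synonyms):
    # Replace each keyword with a semantically close synonym.
    for kw, syn in synonyms.items():
        query = re.sub(re.escape(kw), syn, query, flags=re.IGNORECASE)
    return query

# Hypothetical keyword list and synonym map for one Finance-RAG query.
q = "What are the government grants included within trade and other payables?"
print(mask_query(q, ["government grants", "trade", "payables"]))
print(substitute_query(q, {"government grants": "public subsidies",
                           "trade": "commerce",
                           "payables": "liabilities"}))
```

Multi-word keywords are handled before their substrings here only by ordering; a production version would match on token boundaries.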
Specifically, we identify the ideal optimization direction, as indicated by the red arrow in Figure 4. BMEmbed achieves this theoretical direction on both Multihop-RAG and Finance-RAG, demonstrating its potential to balance the optimization of
First, we use an LLM to extract domain-specific keywords and generate semantically appropriate synonyms for each query (see prompt details in Appendix F). Next, we create two perturbed versions" } ] } ], "index": 10 } ], "discarded_blocks": [ { "bbox": [ 286, 781, 309, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 781, 309, 791 ], "spans": [ { "bbox": [ 286, 781, 309, 791 ], "type": "text", "content": "6863" } ] } ], "index": 11 } ], "page_size": [ 595, 841 ], "page_idx": 7 }, { "para_blocks": [ { "type": "table", "bbox": [ 94, 68, 503, 337 ], "blocks": [ { "bbox": [ 94, 68, 503, 337 ], "lines": [ { "bbox": [ 94, 68, 503, 337 ], "spans": [ { "bbox": [ 94, 68, 503, 337 ], "type": "table", "html": "
<table>
<tr><th>Model</th><th>Perturbation Method</th><th>Hit@1</th><th>Hit@4</th><th>Hit@10</th><th>MAP@10</th></tr>
<tr><td rowspan=3>BM25</td><td>original</td><td>28.51</td><td>46.18</td><td>57.43</td><td>37.46</td></tr>
<tr><td>masked</td><td>0.40 (↓28.11)</td><td>6.65 (↓39.53)</td><td>11.29 (↓46.14)</td><td>3.17 (↓34.29)</td></tr>
<tr><td>substituted</td><td>5.85 (↓22.66)</td><td>12.10 (↓34.08)</td><td>16.13 (↓41.30)</td><td>8.94 (↓28.52)</td></tr>
<tr><td rowspan=3>Qwen2-1.5B</td><td>original</td><td>23.69</td><td>41.37</td><td>53.82</td><td>32.84</td></tr>
<tr><td>masked</td><td>2.41 (↓21.28)</td><td>4.62 (↓36.75)</td><td>5.82 (↓48.00)</td><td>3.31 (↓29.53)</td></tr>
<tr><td>substituted</td><td>8.87 (↓14.82)</td><td>17.14 (↓24.23)</td><td>24.40 (↓29.42)</td><td>13.08 (↓19.76)</td></tr>
<tr><td rowspan=3>Qwen2-1.5B + BMEmbed</td><td>original</td><td>26.31</td><td>45.38</td><td>57.03</td><td>36.21</td></tr>
<tr><td>masked</td><td>2.21 (↓24.10)</td><td>4.42 (↓40.96)</td><td>8.63 (↓48.40)</td><td>3.76 (↓32.45)</td></tr>
<tr><td>substituted</td><td>9.27 (↓17.04)</td><td>18.95 (↓26.43)</td><td>26.81 (↓30.22)</td><td>14.30 (↓21.91)</td></tr>
</table>
", "image_path": "bfb475a914e75fcaa6f6980a5bdb87e9b0848e3ce8d06fde197077b94f490abb.jpg" } ] } ], "index": 0, "angle": 0, "type": "table_body" } ], "index": 0 }, { "bbox": [ 121, 343, 471, 356 ], "lines": [ { "bbox": [ 121, 343, 471, 356 ], "spans": [ { "bbox": [ 121, 343, 471, 356 ], "type": "text", "content": "Table 5: Controlled Retrieval Experiments with Query Perturbations on Finance-RAG." } ] } ], "index": 1, "angle": 0, "type": "text" }, { "bbox": [ 67, 377, 291, 513 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 377, 291, 513 ], "spans": [ { "bbox": [ 67, 377, 291, 513 ], "type": "text", "content": "of each query by either masking the identified keywords or substituting them with their corresponding synonyms (examples shown in Table 4). Finally, we evaluate model performance across these variants to assess the impact of each perturbation type. The evaluation is conducted on the FinanceRAG dataset using three methods: BM25, the base Qwen2-1.5B embedding model, and Qwen2-1.5B fine-tuned with BMEnder. As shown in Table 5, the results provide several insights of BMEnder:" } ] } ], "index": 2 }, { "bbox": [ 67, 515, 291, 775 ], "type": "list", "angle": 0, "index": 5, "blocks": [ { "bbox": [ 67, 515, 291, 677 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 515, 291, 677 ], "spans": [ { "bbox": [ 67, 515, 291, 677 ], "type": "text", "content": "1) Semantic Generalization: Compared to BM25, BMEdb exhibits significantly less performance drop under synonym substitution (Hit@10 drop: 30.22 vs. 41.30), indicating stronger semantic generalization. Notably, even when compared to the base Qwen2-1.5B model, BMEdb achieves slightly better absolute performance (Hit@10: 26.81 vs. 24.40) despite experiencing a similar level of performance drop (Hit@10 drop: 30.22 vs. 29.42). This suggests that our fine-tuning process not only preserves but also enhances the model's semantic generalization ability." 
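The Hit@k and MAP@10 figures in Table 5 follow standard retrieval definitions; a minimal sketch is given below. The normalization of average precision by min(|relevant|, k) is one common convention and an assumption here, not a detail stated in the paper.

```python
def hit_at_k(ranked, relevant, k):
    # Fraction of queries with at least one relevant doc in the top k.
    hits = sum(any(doc in rel for doc in r[:k])
               for r, rel in zip(ranked, relevant))
    return hits / len(ranked)

def map_at_k(ranked, relevant, k=10):
    # Mean average precision truncated at rank k.
    aps = []
    for r, rel in zip(ranked, relevant):
        found, precisions = 0, []
        for i, doc in enumerate(r[:k], start=1):
            if doc in rel:
                found += 1
                precisions.append(found / i)  # precision at this hit
        aps.append(sum(precisions) / min(len(rel), k) if rel else 0.0)
    return sum(aps) / len(aps)

# Toy run: two queries with hypothetical ranked lists and relevant sets.
ranked = [["d3", "d1", "d7"], ["d2", "d9", "d4"]]
relevant = [{"d1"}, {"d5"}]
print(hit_at_k(ranked, relevant, 3))  # 0.5: only the first query hits
```

The same functions apply unchanged to the original, masked, and substituted query variants, which is how the per-perturbation drops in Table 5 would be obtained.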
} ] } ], "index": 3 }, { "bbox": [ 67, 680, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 680, 291, 775 ], "spans": [ { "bbox": [ 67, 680, 291, 775 ], "type": "text", "content": "2) Lexical Sensitivity: Under keyword masking, BMEdb shows a larger performance drop than the base model (Hit@4 drop: 40.96 vs. 36.75), implying that BMEdb has become more sensitive to domain-specific lexical cues, especially to the high ranking items. This indicates that while BMEdb preserves semantic understanding, it" } ] } ], "index": 4 } ], "sub_type": "text" }, { "bbox": [ 302, 377, 526, 391 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 377, 526, 391 ], "spans": [ { "bbox": [ 302, 377, 526, 391 ], "type": "text", "content": "also better incorporates keyword-level information." } ] } ], "index": 6 }, { "bbox": [ 302, 393, 526, 475 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 393, 526, 475 ], "spans": [ { "bbox": [ 302, 393, 526, 475 ], "type": "text", "content": "These results suggest that BMEdb effectively combines the strengths of both lexical and semantic information. This dual capability makes it particularly well-suited for domains that require adaptation to specialized terminology, or for proprietary enterprise datasets." } ] } ], "index": 7 }, { "bbox": [ 303, 491, 381, 504 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 303, 491, 381, 504 ], "spans": [ { "bbox": [ 303, 491, 381, 504 ], "type": "text", "content": "7 Conclusion" } ] } ], "index": 8 }, { "bbox": [ 301, 517, 526, 774 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 301, 517, 526, 774 ], "spans": [ { "bbox": [ 301, 517, 526, 774 ], "type": "text", "content": "With the growing adoption of AI in real-world applications, particularly RAG systems, adapting general-purpose models to domain-specific data remains a critical challenge. 
In this paper, we present BMEmbed, a novel method for adapting text embedding models to private datasets (e.g., company-specific proprietary data). Since private datasets often contain specialized terminology and domain-specific language, we leverage keyword-based retrieval as a supervisory signal to fine-tune general-purpose embedding models. Experimental results demonstrate that BMEmbed effectively enhances retrieval performance, producing more accurate query results on private datasets. As AI continues to transform industries, we hope that our proposed method can further advance the adoption and adaptation of AI in domain-specific applications, ensuring more effective and contextually relevant retrieval." } ] } ], "index": 9 } ], "discarded_blocks": [ { "bbox": [ 286, 781, 310, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 781, 310, 791 ], "spans": [ { "bbox": [ 286, 781, 310, 791 ], "type": "text", "content": "6864" } ] } ], "index": 10 } ], "page_size": [ 595, 841 ], "page_idx": 8 }, { "para_blocks": [ { "bbox": [ 68, 71, 149, 83 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 71, 149, 83 ], "spans": [ { "bbox": [ 68, 71, 149, 83 ], "type": "text", "content": "8 Limitations" } ] } ], "index": 0 }, { "bbox": [ 69, 95, 291, 391 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 69, 95, 291, 391 ], "spans": [ { "bbox": [ 69, 95, 291, 391 ], "type": "text", "content": "This study has several limitations that present opportunities for future research. First, our current method primarily focuses on the retrieval task in embedding models. However, text embeddings are also widely used in domain-specific NLP tasks such as clustering and semantic textual similarity (STS). An interesting direction for future research is exploring task-specific supervisory signals to better adapt general-purpose embedding models to private datasets for applications beyond retrieval, including clustering and STS. 
Second, while our method aims to develop embedding models tailored to private datasets (such as company-specific proprietary data), we evaluate it on public datasets. These datasets were chosen because they were released after the base embedding models we assess, ensuring a fair comparison and public reproducibility. However, applying this method to proprietary datasets in real-world RAG scenarios remains an important next step. We hope future research will explore these practical applications to further validate and refine our approach.
Association for Computational Linguistics." } ] } ], "index": 5 }, { "bbox": [ 69, 629, 291, 708 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 629, 291, 708 ], "spans": [ { "bbox": [ 69, 629, 291, 708 ], "type": "text", "content": "Philip Bachman, R. Devon Hjelm, and William Buchwalter. 2019. Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 15509-15519." } ] } ], "index": 6 }, { "bbox": [ 69, 719, 290, 773 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 719, 290, 773 ], "spans": [ { "bbox": [ 69, 719, 290, 773 ], "type": "text", "content": "Parishad BehnamGhader, Vaibhav Adlakha, Marius Mosbach, Dzmitry Bahdanau, Nicolas Chapados, and Siva Reddy. 2024. Llm2vec: Large language models are secretly powerful text encoders. arXiv preprint arXiv:2404.05961." } ] } ], "index": 7 } ], "sub_type": "ref_text" }, { "bbox": [ 304, 72, 527, 774 ], "type": "list", "angle": 0, "index": 17, "blocks": [ { "bbox": [ 304, 72, 527, 150 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 72, 527, 150 ], "spans": [ { "bbox": [ 304, 72, 527, 150 ], "type": "text", "content": "Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to rank: from pairwise approach to listwise approach. In Machine Learning, Proceedings of the Twenty-Fourth International Conference (ICML 2007), Corvallis, Oregon, USA, June 20-24, 2007, volume 227 of ACM International Conference Proceeding Series, pages 129-136. ACM." } ] } ], "index": 9 }, { "bbox": [ 304, 157, 527, 257 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 157, 527, 257 ], "spans": [ { "bbox": [ 304, 157, 527, 257 ], "type": "text", "content": "Jianlyu Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. 2024. 
M3-embedding: Multi-linguality, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. In Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pages 2318-2335. Association for Computational Linguistics." } ] } ], "index": 10 }, { "bbox": [ 304, 263, 527, 341 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 263, 527, 341 ], "spans": [ { "bbox": [ 304, 263, 527, 341 ], "type": "text", "content": "Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 1597-1607. PMLR." } ] } ], "index": 11 }, { "bbox": [ 304, 348, 527, 425 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 348, 527, 425 ], "spans": [ { "bbox": [ 304, 348, 527, 425 ], "type": "text", "content": "Gordon V. Cormack, Charles L. A. Clarke, and Stefan Böttcher. 2009. Reciprocal rank fusion outperforms condorcet and individual rank learning methods. In Proceedings of the 32nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2009, Boston, MA, USA, July 19-23, 2009, pages 758-759. ACM." } ] } ], "index": 12 }, { "bbox": [ 304, 432, 527, 509 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 432, 527, 509 ], "spans": [ { "bbox": [ 304, 432, 527, 509 ], "type": "text", "content": "Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft. 2017. Neural ranking models with weak supervision. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, August 7-11, 2017, pages 65-74. ACM." 
} ] } ], "index": 13 }, { "bbox": [ 304, 517, 527, 628 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 517, 527, 628 ], "spans": [ { "bbox": [ 304, 517, 527, 628 ], "type": "text", "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics." } ] } ], "index": 14 }, { "bbox": [ 304, 634, 527, 712 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 634, 527, 712 ], "spans": [ { "bbox": [ 304, 634, 527, 712 ], "type": "text", "content": "Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November; 2021, pages 6894-6910. Association for Computational Linguistics." } ] } ], "index": 15 }, { "bbox": [ 304, 719, 527, 774 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 719, 527, 774 ], "spans": [ { "bbox": [ 304, 719, 527, 774 ], "type": "text", "content": "Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo, Meng Wang, and Haofen Wang. 2023. Retrievalaugmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997." 
} ] } ], "index": 16 } ], "sub_type": "ref_text" } ], "discarded_blocks": [ { "bbox": [ 286, 781, 309, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 781, 309, 791 ], "spans": [ { "bbox": [ 286, 781, 309, 791 ], "type": "text", "content": "6865" } ] } ], "index": 18 } ], "page_size": [ 595, 841 ], "page_idx": 9 }, { "para_blocks": [ { "bbox": [ 69, 72, 289, 773 ], "type": "list", "angle": 0, "index": 10, "blocks": [ { "bbox": [ 69, 72, 289, 148 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 72, 289, 148 ], "spans": [ { "bbox": [ 69, 72, 289, 148 ], "type": "text", "content": "Dany Haddad and Joydeep Ghosh. 2019. Learning more from less: Towards strengthening weak supervision for ad-hoc retrieval. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019, pages 857-860. ACM." } ] } ], "index": 0 }, { "bbox": [ 69, 157, 289, 235 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 157, 289, 235 ], "spans": [ { "bbox": [ 69, 157, 289, 235 ], "type": "text", "content": "R. Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Philip Bachman, Adam Trischler, and Yoshua Bengio. 2019. Learning deep representations by mutual information estimation and maximization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net." } ] } ], "index": 1 }, { "bbox": [ 69, 243, 289, 309 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 243, 289, 309 ], "spans": [ { "bbox": [ 69, 243, 289, 309 ], "type": "text", "content": "Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net." 
} ] } ], "index": 2 }, { "bbox": [ 69, 317, 289, 371 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 317, 289, 371 ], "spans": [ { "bbox": [ 69, 317, 289, 371 ], "type": "text", "content": "Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised dense information retrieval with contrastive learning. Trans. Mach. Learn. Res., 2022." } ] } ], "index": 3 }, { "bbox": [ 69, 380, 289, 468 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 380, 289, 468 ], "spans": [ { "bbox": [ 69, 380, 289, 468 ], "type": "text", "content": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6769-6781. Association for Computational Linguistics." } ] } ], "index": 4 }, { "bbox": [ 69, 476, 289, 532 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 476, 289, 532 ], "spans": [ { "bbox": [ 69, 476, 289, 532 ], "type": "text", "content": "Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. 2024. Nv-embed: Improved techniques for training llms as generalist embedding models. arXiv preprint arXiv:2405.17428." } ] } ], "index": 5 }, { "bbox": [ 69, 539, 289, 584 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 539, 289, 584 ], "spans": [ { "bbox": [ 69, 539, 289, 584 ], "type": "text", "content": "Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. 2023. Towards general text embeddings with multi-stage contrastive learning. arXiv preprint arXiv:2308.03281." 
} ] } ], "index": 6 }, { "bbox": [ 69, 592, 289, 646 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 592, 289, 646 ], "spans": [ { "bbox": [ 69, 592, 289, 646 ], "type": "text", "content": "Jiaxin Liu, Yi Yang, and Kar Yan Tam. 2024. Beyond surface similarity: Detecting subtle semantic shifts in financial narratives. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2641-2652." } ] } ], "index": 7 }, { "bbox": [ 69, 655, 289, 710 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 655, 289, 710 ], "spans": [ { "bbox": [ 69, 655, 289, 710 ], "type": "text", "content": "Gabriel de Souza P Moreira, Radek Osmulski, Mengyao Xu, Ronay Ak, Benedikt Schifferer, and Even Oldridge. 2024. Nv-retriever: Improving text embedding models with effective hard-negative mining. arXiv preprint arXiv:2407.15831." } ] } ], "index": 8 }, { "bbox": [ 69, 718, 289, 773 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 718, 289, 773 ], "spans": [ { "bbox": [ 69, 718, 289, 773 ], "type": "text", "content": "Niklas Muennighoff, Nouamane Tazi, Loic Magne, and Nils Reimers. 2023. MTEB: massive text embedding benchmark. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2023, Dubrovnik, Croatia," } ] } ], "index": 9 } ], "sub_type": "ref_text" }, { "bbox": [ 304, 72, 525, 772 ], "type": "list", "angle": 0, "index": 23, "blocks": [ { "bbox": [ 314, 72, 524, 95 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 314, 72, 524, 95 ], "spans": [ { "bbox": [ 314, 72, 524, 95 ], "type": "text", "content": "May 2-6, 2023, pages 2006-2029. Association for Computational Linguistics." } ] } ], "index": 11 }, { "bbox": [ 304, 106, 525, 183 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 106, 525, 183 ], "spans": [ { "bbox": [ 304, 106, 525, 183 ], "type": "text", "content": "Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant, Ji Ma, Keith B. 
Hall, Daniel Cer, and Yinfei Yang. 2022. Sentence-t5: Scalable sentence encoders from pre-trained text-to-text models. In Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 1864-1874. Association for Computational Linguistics." } ] } ], "index": 12 }, { "bbox": [ 304, 193, 525, 238 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 193, 525, 238 ], "spans": [ { "bbox": [ 304, 193, 525, 238 ], "type": "text", "content": "Nicholas Pipitone and Ghita Houir Alami. 2024. Legalbench-rag: A benchmark for retrieval-augmented generation in the legal domain. arXiv preprint arXiv:2408.10343." } ] } ], "index": 13 }, { "bbox": [ 304, 248, 525, 303 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 248, 525, 303 ], "spans": [ { "bbox": [ 304, 248, 525, 303 ], "type": "text", "content": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67." } ] } ], "index": 14 }, { "bbox": [ 304, 313, 525, 402 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 313, 525, 402 ], "spans": [ { "bbox": [ 304, 313, 525, 402 ], "type": "text", "content": "Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980-3990. Association for Computational Linguistics." } ] } ], "index": 15 }, { "bbox": [ 304, 412, 525, 446 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 412, 525, 446 ], "spans": [ { "bbox": [ 304, 412, 525, 446 ], "type": "text", "content": "Stephen E. Robertson and Hugo Zaragoza. 2009. 
The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr., 3(4):333-389." } ] } ], "index": 16 }, { "bbox": [ 304, 455, 525, 533 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 455, 525, 533 ], "spans": [ { "bbox": [ 304, 455, 525, 533 ], "type": "text", "content": "Haochen Tan, Wei Shao, Han Wu, Ke Yang, and Linqi Song. 2022. A sentence is worth 128 pseudo tokens: A semantic-aware contrastive learning framework for sentence embeddings. In Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 246-256. Association for Computational Linguistics." } ] } ], "index": 17 }, { "bbox": [ 304, 544, 525, 577 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 544, 525, 577 ], "spans": [ { "bbox": [ 304, 544, 525, 577 ], "type": "text", "content": "Yixuan Tang and Yi Yang. 2024a. Do we need domain-specific embedding models? an empirical investigation. arXiv preprint arXiv:2409.18511." } ] } ], "index": 18 }, { "bbox": [ 304, 587, 525, 621 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 587, 525, 621 ], "spans": [ { "bbox": [ 304, 587, 525, 621 ], "type": "text", "content": "Yixuan Tang and Yi Yang. 2024b. Multihop-rag: Benchmarking retrieval-augmented generation for multi-hop queries. arXiv preprint arXiv:2401.15391." } ] } ], "index": 19 }, { "bbox": [ 304, 631, 525, 664 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 631, 525, 664 ], "spans": [ { "bbox": [ 304, 631, 525, 664 ], "type": "text", "content": "Yixuan Tang and Yi Yang. 2024c. Pooling and attention: What are effective designs for llm-based embedding models? Preprint, arXiv:2409.02727." } ] } ], "index": 20 }, { "bbox": [ 304, 675, 525, 708 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 675, 525, 708 ], "spans": [ { "bbox": [ 304, 675, 525, 708 ], "type": "text", "content": "Aäron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. 
Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748." } ] } ], "index": 21 }, { "bbox": [ 304, 718, 525, 772 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 718, 525, 772 ], "spans": [ { "bbox": [ 304, 718, 525, 772 ], "type": "text", "content": "Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533." } ] } ], "index": 22 } ], "sub_type": "ref_text" } ], "discarded_blocks": [ { "bbox": [ 286, 781, 309, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 781, 309, 791 ], "spans": [ { "bbox": [ 286, 781, 309, 791 ], "type": "text", "content": "6866" } ] } ], "index": 24 } ], "page_size": [ 595, 841 ], "page_idx": 10 }, { "para_blocks": [ { "bbox": [ 69, 72, 291, 149 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 69, 72, 291, 149 ], "spans": [ { "bbox": [ 69, 72, 291, 149 ], "type": "text", "content": "Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 9929-9939. PMLR." } ] } ], "index": 0 }, { "bbox": [ 69, 157, 290, 214 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 69, 157, 290, 214 ], "spans": [ { "bbox": [ 69, 157, 290, 214 ], "type": "text", "content": "Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. 2008. Listwise approach to learning to rank: theory and algorithm. In Proceedings of the 25th international conference on Machine learning, pages 1192-1199." 
} ] } ], "index": 1 }, { "bbox": [ 69, 222, 291, 299 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 69, 222, 291, 299 ], "spans": [ { "bbox": [ 69, 222, 291, 299 ], "type": "text", "content": "Ji Xin, Chenyan Xiong, Ashwin Srinivasan, Ankita Sharma, Damien Jose, and Paul Bennett. 2022. Zero-shot dense retrieval with momentum adversarial domain invariant representations. In Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 4008-4020. Association for Computational Linguistics." } ] } ], "index": 2 }, { "bbox": [ 69, 306, 291, 396 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 69, 306, 291, 396 ], "spans": [ { "bbox": [ 69, 306, 291, 396 ], "type": "text", "content": "Kun Zhou, Beichen Zhang, Wayne Xin Zhao, and JiRong Wen. 2022. Debiased contrastive learning of unsupervised sentence representations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 6120-6130. Association for Computational Linguistics." 
} ] } ], "index": 3 }, { "bbox": [ 68, 406, 260, 433 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 406, 260, 433 ], "spans": [ { "bbox": [ 68, 406, 260, 433 ], "type": "text", "content": "A Prompts Used for Domain Query Generation" } ] } ], "index": 4 }, { "bbox": [ 67, 441, 291, 468 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 441, 291, 468 ], "spans": [ { "bbox": [ 67, 441, 291, 468 ], "type": "text", "content": "The LLM prompts used in the domain query generation stage are detailed as follows:" } ] } ], "index": 5 }, { "bbox": [ 73, 475, 174, 486 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 73, 475, 174, 486 ], "spans": [ { "bbox": [ 73, 475, 174, 486 ], "type": "text", "content": "Event Extraction Prompt:" } ] } ], "index": 6 }, { "bbox": [ 73, 486, 285, 506 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 486, 285, 506 ], "spans": [ { "bbox": [ 73, 486, 285, 506 ], "type": "text", "content": "Given a document, please extract all the events and their associated topics and organization in the context." } ] } ], "index": 7 }, { "bbox": [ 73, 506, 285, 534 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 506, 285, 534 ], "spans": [ { "bbox": [ 73, 506, 285, 534 ], "type": "text", "content": "Note: 1. The event should not contain ambiguous references, such as 'he', 'she,' and 'it', and should use complete names." } ] } ], "index": 8 }, { "bbox": [ 73, 534, 285, 564 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 534, 285, 564 ], "spans": [ { "bbox": [ 73, 534, 285, 564 ], "type": "text", "content": "2. You should give at least one passage in the original text associated to the event you extract, DO NOT make up any event." } ] } ], "index": 9 }, { "bbox": [ 73, 565, 286, 585 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 565, 286, 585 ], "spans": [ { "bbox": [ 73, 565, 286, 585 ], "type": "text", "content": "3. 
If there are multiple paragraphs associated to the extracted event, please list and number all of them." } ] } ], "index": 10 }, { "bbox": [ 73, 585, 285, 605 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 585, 285, 605 ], "spans": [ { "bbox": [ 73, 585, 285, 605 ], "type": "text", "content": "4. If the event does not contain some of the arguments mentioned above, please leave it empty." } ] } ], "index": 11 }, { "bbox": [ 73, 605, 286, 645 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 605, 286, 645 ], "spans": [ { "bbox": [ 73, 605, 286, 645 ], "type": "text", "content": "5. The type of Event involves fine-grained events and general events, where fine-grained events focus on specific facts and details while general events are summarizations of happened fine-grained events." } ] } ], "index": 12 }, { "bbox": [ 73, 645, 285, 664 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 645, 285, 664 ], "spans": [ { "bbox": [ 73, 645, 285, 664 ], "type": "text", "content": "6. Please return the fine-grained events first, then return general events." 
} ] } ], "index": 13 }, { "bbox": [ 73, 665, 137, 684 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 665, 137, 684 ], "spans": [ { "bbox": [ 73, 665, 137, 684 ], "type": "text", "content": "The document is: {doc}" } ] } ], "index": 14 }, { "bbox": [ 73, 684, 285, 704 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 684, 285, 704 ], "spans": [ { "bbox": [ 73, 684, 285, 704 ], "type": "text", "content": "Please return the extracted event in the following format with following arguments:" } ] } ], "index": 15 }, { "bbox": [ 74, 705, 104, 714 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 74, 705, 104, 714 ], "spans": [ { "bbox": [ 74, 705, 104, 714 ], "type": "text", "content": "[Event]:" } ] } ], "index": 16 }, { "bbox": [ 74, 715, 104, 724 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 74, 715, 104, 724 ], "spans": [ { "bbox": [ 74, 715, 104, 724 ], "type": "text", "content": "[Topic]:" } ] } ], "index": 17 }, { "bbox": [ 74, 724, 167, 735 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 74, 724, 167, 735 ], "spans": [ { "bbox": [ 74, 724, 167, 735 ], "type": "text", "content": "[Original context]: 1. ...." } ] } ], "index": 18 }, { "bbox": [ 74, 735, 98, 744 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 74, 735, 98, 744 ], "spans": [ { "bbox": [ 74, 735, 98, 744 ], "type": "text", "content": "2. ......" 
} ] } ], "index": 19 }, { "bbox": [ 74, 753, 101, 764 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 74, 753, 101, 764 ], "spans": [ { "bbox": [ 74, 753, 101, 764 ], "type": "text", "content": "[Type]:" } ] } ], "index": 20 }, { "bbox": [ 74, 765, 157, 774 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 74, 765, 157, 774 ], "spans": [ { "bbox": [ 74, 765, 157, 774 ], "type": "text", "content": "Events you extract are:" } ] } ], "index": 21 }, { "bbox": [ 309, 72, 407, 82 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 309, 72, 407, 82 ], "spans": [ { "bbox": [ 309, 72, 407, 82 ], "type": "text", "content": "Query Synthesis Prompt:" } ] } ], "index": 22 }, { "bbox": [ 308, 82, 520, 121 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 82, 520, 121 ], "spans": [ { "bbox": [ 308, 82, 520, 121 ], "type": "text", "content": "Given several events and their original source document, please ask several questions according to the information and give the original reference paragraph following this format:" } ] } ], "index": 23 }, { "bbox": [ 309, 122, 343, 132 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 122, 343, 132 ], "spans": [ { "bbox": [ 309, 122, 343, 132 ], "type": "text", "content": "[Event]:" } ] } ], "index": 24 }, { "bbox": [ 309, 132, 350, 142 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 132, 350, 142 ], "spans": [ { "bbox": [ 309, 132, 350, 142 ], "type": "text", "content": "[Question]:" } ] } ], "index": 25 }, { "bbox": [ 308, 142, 520, 201 ], "type": "list", "angle": 0, "index": 29, "blocks": [ { "bbox": [ 308, 142, 520, 161 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 142, 520, 161 ], "spans": [ { "bbox": [ 308, 142, 520, 161 ], "type": "text", "content": "Note: 1. Don't need to mention all the arguments in your question."
} ] } ], "index": 26 }, { "bbox": [ 308, 162, 520, 190 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 162, 520, 190 ], "spans": [ { "bbox": [ 308, 162, 520, 190 ], "type": "text", "content": "2. You can involve the original document information, but make sure that your question is about the topic of the given event." } ] } ], "index": 27 }, { "bbox": [ 308, 191, 518, 201 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 191, 518, 201 ], "spans": [ { "bbox": [ 308, 191, 518, 201 ], "type": "text", "content": "3. You should ask questions separately to different events." } ] } ], "index": 28 } ], "sub_type": "text" }, { "bbox": [ 309, 202, 349, 211 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 202, 349, 211 ], "spans": [ { "bbox": [ 309, 202, 349, 211 ], "type": "text", "content": "Document:" } ] } ], "index": 30 }, { "bbox": [ 309, 211, 329, 222 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 211, 329, 222 ], "spans": [ { "bbox": [ 309, 211, 329, 222 ], "type": "text", "content": "{doc}" } ] } ], "index": 31 }, { "bbox": [ 309, 222, 333, 232 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 222, 333, 232 ], "spans": [ { "bbox": [ 309, 222, 333, 232 ], "type": "text", "content": "Event:" } ] } ], "index": 32 }, { "bbox": [ 309, 232, 334, 241 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 232, 334, 241 ], "spans": [ { "bbox": [ 309, 232, 334, 241 ], "type": "text", "content": "{event}" } ] } ], "index": 33 }, { "bbox": [ 309, 242, 435, 252 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 242, 435, 252 ], "spans": [ { "bbox": [ 309, 242, 435, 252 ], "type": "text", "content": "Your question towards given event:" } ] } ], "index": 34 }, { "bbox": [ 303, 280, 493, 294 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 303, 280, 493, 294 ], "spans": [ { "bbox": [ 303, 280, 493, 294 ], "type": "text", "content": "B Case Study of Query Generation" } ] } ], "index": 35 }, { "bbox": [ 302, 310, 
525, 343 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 310, 525, 343 ], "spans": [ { "bbox": [ 302, 310, 525, 343 ], "type": "text", "content": "In this section, we present a real query generation process, showcasing the input document, intermediate extracted events, and the final generated query." } ] } ], "index": 36 }, { "bbox": [ 308, 362, 462, 373 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 308, 362, 462, 373 ], "spans": [ { "bbox": [ 308, 362, 462, 373 ], "type": "text", "content": "Document Chunk from Multihop-RAG:" } ] } ], "index": 37 }, { "bbox": [ 308, 373, 520, 533 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 373, 520, 533 ], "spans": [ { "bbox": [ 308, 373, 520, 533 ], "type": "text", "content": "Table of Contents Table of Contents Echo, Fire TV, and Kindle deals Apple deals TV deals Laptop deals Headphone and earbud deals Tablet deals Gaming deals Speaker deals Vacuum deals Kitchen deals Smart home deals Fitness deals Beauty tech deals Drone deals Camera deals Lego deals Gift card deals UPDATE: Nov. 27, 2023, 5:00 a.m. EST This post has been updated with all of the latest Cyber Monday deals available at Amazon. Amazon is dragging out the year's biggest shopping holiday(s) into 11 days of deals. The retail giant began its Black Friday sale in the early morning of Friday, Nov. 17 (a week ahead of schedule) and was on top of making the switch to Cyber Monday language in the wee hours of Saturday, Nov. 25. Official Cyber Monday mode, which is currently on through Monday, Nov. 27, includes both a ton of deals carried over from Black Friday plus some new ones." 
} ] } ], "index": 38 }, { "bbox": [ 309, 555, 411, 564 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 309, 555, 411, 564 ], "spans": [ { "bbox": [ 309, 555, 411, 564 ], "type": "text", "content": "GPT-4o Extracted Events:" } ] } ], "index": 39 }, { "bbox": [ 309, 565, 400, 574 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 565, 400, 574 ], "spans": [ { "bbox": [ 309, 565, 400, 574 ], "type": "text", "content": "Fine-Grained Events" } ] } ], "index": 40 }, { "bbox": [ 309, 575, 482, 585 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 575, 482, 585 ], "spans": [ { "bbox": [ 309, 575, 482, 585 ], "type": "text", "content": "1. [Event]: Amazon began its Black Friday sale." } ] } ], "index": 41 }, { "bbox": [ 309, 585, 404, 595 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 585, 404, 595 ], "spans": [ { "bbox": [ 309, 585, 404, 595 ], "type": "text", "content": "[Topic]: Black Friday sale" } ] } ], "index": 42 }, { "bbox": [ 308, 595, 520, 624 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 595, 520, 624 ], "spans": [ { "bbox": [ 308, 595, 520, 624 ], "type": "text", "content": "[Original context]: \"The retail giant began its Black Friday sale in the early morning of Friday, Nov. 17 (a week ahead of schedule)...\"" } ] } ], "index": 43 }, { "bbox": [ 309, 625, 385, 635 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 625, 385, 635 ], "spans": [ { "bbox": [ 309, 625, 385, 635 ], "type": "text", "content": "[Type]: Fine-grained" } ] } ], "index": 44 }, { "bbox": [ 309, 635, 518, 645 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 635, 518, 645 ], "spans": [ { "bbox": [ 309, 635, 518, 645 ], "type": "text", "content": "2. [Event]: Amazon switched to Cyber Monday language." 
} ] } ], "index": 45 }, { "bbox": [ 309, 645, 412, 655 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 645, 412, 655 ], "spans": [ { "bbox": [ 309, 645, 412, 655 ], "type": "text", "content": "[Topic]: Cyber Monday sale" } ] } ], "index": 46 }, { "bbox": [ 309, 655, 520, 684 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 655, 520, 684 ], "spans": [ { "bbox": [ 309, 655, 520, 684 ], "type": "text", "content": "[Original context]: \"...and was on top of making the switch to Cyber Monday language in the wee hours of Saturday, Nov. 25.\"" } ] } ], "index": 47 }, { "bbox": [ 309, 684, 384, 694 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 684, 384, 694 ], "spans": [ { "bbox": [ 309, 684, 384, 694 ], "type": "text", "content": "[Type]: Fine-grained" } ] } ], "index": 48 }, { "bbox": [ 309, 694, 520, 714 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 694, 520, 714 ], "spans": [ { "bbox": [ 309, 694, 520, 714 ], "type": "text", "content": "3. [Event]: Amazon's Cyber Monday mode includes deals carried over from Black Friday plus some new ones." } ] } ], "index": 49 }, { "bbox": [ 309, 714, 412, 724 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 714, 412, 724 ], "spans": [ { "bbox": [ 309, 714, 412, 724 ], "type": "text", "content": "[Topic]: Cyber Monday sale" } ] } ], "index": 50 }, { "bbox": [ 309, 724, 520, 762 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 724, 520, 762 ], "spans": [ { "bbox": [ 309, 724, 520, 762 ], "type": "text", "content": "[Original context]: \"Official Cyber Monday mode, which is currently on through Monday, Nov. 
27, includes both a ton of deals carried over from Black Friday plus some new ones.\"" } ] } ], "index": 51 }, { "bbox": [ 309, 764, 384, 774 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 764, 384, 774 ], "spans": [ { "bbox": [ 309, 764, 384, 774 ], "type": "text", "content": "[Type]: Fine-grained" } ] } ], "index": 52 } ], "discarded_blocks": [ { "bbox": [ 286, 781, 309, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 781, 309, 791 ], "spans": [ { "bbox": [ 286, 781, 309, 791 ], "type": "text", "content": "6867" } ] } ], "index": 53 } ], "page_size": [ 595, 841 ], "page_idx": 11 }, { "para_blocks": [ { "type": "table", "bbox": [ 71, 68, 289, 139 ], "blocks": [ { "bbox": [ 71, 68, 289, 139 ], "lines": [ { "bbox": [ 71, 68, 289, 139 ], "spans": [ { "bbox": [ 71, 68, 289, 139 ], "type": "table", "html": "
<table><tr><th>Model</th><th>Hit@1</th><th>Hit@4</th><th>Hit@10</th><th>MAP@10</th></tr>
<tr><td>Qwen2-1.5B</td><td>33.97</td><td>59.69</td><td>76.50</td><td>22.22</td></tr>
<tr><td>e5-mistral-7B</td><td>29.49</td><td>54.99</td><td>75.39</td><td>20.33</td></tr>
<tr><td>MiniLM</td><td>17.52</td><td>39.96</td><td>55.79</td><td>12.55</td></tr>
<tr><td>MiniLM+BMEmb</td><td>32.77 (+15.25)</td><td>60.18 (+20.22)</td><td>78.27 (+22.48)</td><td>22.40 (+9.85)</td></tr></table>
", "image_path": "2be8111975c628b3144b71d932040169fc3a9cd364379448e4e7b80455a8da0f.jpg" } ] } ], "index": 0, "angle": 0, "type": "table_body" } ], "index": 0 }, { "bbox": [ 73, 195, 197, 206 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 73, 195, 197, 206 ], "spans": [ { "bbox": [ 73, 195, 197, 206 ], "type": "text", "content": "GPT-4o-mini Generated Query:" } ] } ], "index": 2 }, { "bbox": [ 72, 206, 286, 316 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 72, 206, 286, 316 ], "spans": [ { "bbox": [ 72, 206, 286, 316 ], "type": "text", "content": "1. [Event]: Amazon began its Black Friday sale. [Question]: How did Amazon's early start to the Black Friday sale impact customer engagement compared to previous years? \n2. [Event]: Amazon switched to Cyber Monday language. [Question]: What strategies did Amazon employ to transition from Black Friday to Cyber Monday promotions? \n3. [Event]: Amazon's Cyber Monday mode includes deals carried over from Black Friday plus some new ones. [Question]: What types of new deals can customers expect during Amazon's Cyber Monday sale compared to those from Black Friday?" } ] } ], "index": 3 }, { "bbox": [ 68, 328, 217, 356 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 328, 217, 356 ], "spans": [ { "bbox": [ 68, 328, 217, 356 ], "type": "text", "content": "C Experimental Details for Generalization Settings" } ] } ], "index": 4 }, { "bbox": [ 67, 364, 291, 485 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 364, 291, 485 ], "spans": [ { "bbox": [ 67, 364, 291, 485 ], "type": "text", "content": "Setting 1: We use the all-MiniLM-L6-v2 model, which has only 22.7M parameters. We fine-tune it on the Multihop-RAG dataset using a learning rate of 2e-5 for 500 steps, with the same sampling strategy used in our main experiments for this dataset. The full fine-tuning process required only 24 minutes on a single NVIDIA 3090 GPU, with a peak memory usage of 4GB. The results are presented in Table 6."
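The generalization settings that follow reuse the listwise fine-tuning objective of the main experiments, comparing ListNet-based training against ListMLE. As background, here is a minimal pure-Python sketch of the two standard listwise losses (ListNet from Cao et al., 2007; ListMLE from Xia et al., 2008); the function and variable names are illustrative, not the authors' implementation:

```python
import math

def _softmax(xs, t=1.0):
    # Numerically stable softmax with temperature t.
    m = max(x / t for x in xs)
    exps = [math.exp(x / t - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def listnet_loss(model_scores, bm25_scores, temperature=1.0):
    """ListNet: cross-entropy between the top-one probability
    distributions induced by the teacher scores (here, BM25) and by
    the model scores over one ranked list."""
    p_teacher = _softmax(bm25_scores, temperature)
    log_p_model = [math.log(p) for p in _softmax(model_scores, temperature)]
    return -sum(pt * lp for pt, lp in zip(p_teacher, log_p_model))

def listmle_loss(model_scores, teacher_order):
    """ListMLE: negative log-likelihood of the teacher's permutation
    under the Plackett-Luce model of the model scores."""
    ordered = [model_scores[i] for i in teacher_order]
    loss = 0.0
    for i in range(len(ordered)):
        tail = ordered[i:]
        m = max(tail)
        # log-sum-exp over the remaining items minus the chosen score
        loss += math.log(sum(math.exp(s - m) for s in tail)) + m - ordered[i]
    return loss
```

Both losses shrink as the model's scores agree with the teacher ordering; ListNet matches a full score distribution, while ListMLE only needs the teacher's permutation.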
} ] } ], "index": 5 }, { "bbox": [ 67, 495, 291, 589 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 495, 291, 589 ], "spans": [ { "bbox": [ 67, 495, 291, 589 ], "type": "text", "content": "Setting 2: We conducted an additional experiment using ListMLE (Xia et al., 2008) on the Qwen2-1.5B model and the Multihop-RAG dataset, under the same settings as our main experiment. This setup is compared against the ListNet-based training used in our main experiment. The results are presented in Table 7." } ] } ], "index": 6 }, { "bbox": [ 67, 598, 291, 774 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 598, 291, 774 ], "spans": [ { "bbox": [ 67, 598, 291, 774 ], "type": "text", "content": "Setting 3: We conducted an additional experiment to evaluate the adapted embeddings on a semantic similarity task. Specifically, we used the model fine-tuned on the Finance-RAG dataset and evaluated it on the FinSTS dataset (Liu et al., 2024), a well-annotated benchmark designed to detect subtle semantic shifts in financial narratives. Since both datasets are based on financial reports, FinSTS serves as a \"private evaluation set\" in this context. In this evaluation, we adopted a last-token pooling configuration and used the Cosine Spearman Correlation as the evaluation metric. The results are presented in Table 8." } ] } ], "index": 7 }, { "type": "table", "bbox": [ 328, 69, 503, 110 ], "blocks": [ { "bbox": [ 67, 148, 289, 171 ], "lines": [ { "bbox": [ 67, 148, 289, 171 ], "spans": [ { "bbox": [ 67, 148, 289, 171 ], "type": "text", "content": "Table 6: Retrieval Performance of Different Models on Multihop-RAG." } ] } ], "index": 1, "angle": 0, "type": "table_caption" }, { "bbox": [ 328, 69, 503, 110 ], "lines": [ { "bbox": [ 328, 69, 503, 110 ], "spans": [ { "bbox": [ 328, 69, 503, 110 ], "type": "table", "html": "
<table><tr><th>Method</th><th>Hit@1</th><th>Hit@4</th><th>Hit@10</th><th>MAP@10</th></tr>
<tr><td>ListNet</td><td>40.58</td><td>68.34</td><td>83.06</td><td>26.54</td></tr>
<tr><td>ListMLE</td><td>39.87</td><td>67.98</td><td>83.10</td><td>26.29</td></tr></table>
", "image_path": "143322dedaa3e67a98653fa89f55841e0b34c6db5cef2229ab7fa6c43331c32e.jpg" } ] } ], "index": 8, "angle": 0, "type": "table_body" } ], "index": 8 }, { "type": "table", "bbox": [ 305, 153, 524, 191 ], "blocks": [ { "bbox": [ 302, 117, 524, 142 ], "lines": [ { "bbox": [ 302, 117, 524, 142 ], "spans": [ { "bbox": [ 302, 117, 524, 142 ], "type": "text", "content": "Table 7: ListMLE vs. ListNet under identical training settings." } ] } ], "index": 9, "angle": 0, "type": "table_caption" }, { "bbox": [ 305, 153, 524, 191 ], "lines": [ { "bbox": [ 305, 153, 524, 191 ], "spans": [ { "bbox": [ 305, 153, 524, 191 ], "type": "table", "html": "
<table><tr><th>Model</th><th>without BMEmb</th><th>with BMEmb</th><th>Improvement</th></tr>
<tr><td>Qwen2-1.5B</td><td>0.2566</td><td>0.2727</td><td>+0.0161</td></tr>
<tr><td>e5-mistral-7B</td><td>0.2678</td><td>0.3024</td><td>+0.0346</td></tr></table>
", "image_path": "6cd370e09132085d00d55246fb1bff557d36a3ca2a8e96c0fe6da3c2221dd136.jpg" } ] } ], "index": 10, "angle": 0, "type": "table_body" } ], "index": 10 }, { "bbox": [ 302, 199, 524, 223 ], "lines": [ { "bbox": [ 302, 199, 524, 223 ], "spans": [ { "bbox": [ 302, 199, 524, 223 ], "type": "text", "content": "Table 8: Evaluation on FinSTS with Cosine Spearman correlation." } ] } ], "index": 11, "angle": 0, "type": "text" }, { "bbox": [ 303, 244, 404, 259 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 303, 244, 404, 259 ], "spans": [ { "bbox": [ 303, 244, 404, 259 ], "type": "text", "content": "D Ablation Study" } ] } ], "index": 12 }, { "bbox": [ 302, 266, 503, 291 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 266, 503, 291 ], "spans": [ { "bbox": [ 302, 266, 503, 291 ], "type": "text", "content": "D.1 Ablation Study of Query Generation Module" } ] } ], "index": 13 }, { "bbox": [ 301, 297, 526, 527 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 301, 297, 526, 527 ], "spans": [ { "bbox": [ 301, 297, 526, 527 ], "type": "text", "content": "We conduct experiments to investigate the impact of the number of synthetic queries used for finetuning. Specifically, we compare three settings: (1) using the full set of synthetic queries, (2) using a randomly sampled " }, { "bbox": [ 301, 297, 526, 527 ], "type": "inline_equation", "content": "50\\%" }, { "bbox": [ 301, 297, 526, 527 ], "type": "text", "content": " subset, and (3) using a randomly sampled " }, { "bbox": [ 301, 297, 526, 527 ], "type": "inline_equation", "content": "25\\%" }, { "bbox": [ 301, 297, 526, 527 ], "type": "text", "content": " subset. To control for the total number of training samples, we change the number of listwise samples generated per query. Specifically, we increase the number of sampled ranking lists per query accordingly when using fewer queries, ensuring the overall amount of training data remains constant. 
All experiments are conducted on the Multihop-RAG dataset using the Qwen2-1.5B model. All other settings are kept fixed, including the sampling strategy, number of training steps (1,000), and the temperature (1.0) used in listwise fine-tuning." } ] } ], "index": 14 }, { "bbox": [ 302, 528, 525, 623 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 528, 525, 623 ], "spans": [ { "bbox": [ 302, 528, 525, 623 ], "type": "text", "content": "As shown in Table 9, no significant performance difference is observed across the three settings, suggesting that the number of synthetic queries has limited impact on the model's performance. This indicates that BMEmb can compensate for fewer queries by generating multiple listwise samples per query, thereby maintaining training signal quality." } ] } ], "index": 15 }, { "bbox": [ 302, 631, 507, 656 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 631, 507, 656 ], "spans": [ { "bbox": [ 302, 631, 507, 656 ], "type": "text", "content": "D.2 Ablation Study of Relevant Sampling Module" } ] } ], "index": 16 }, { "bbox": [ 302, 662, 526, 716 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 662, 526, 716 ], "spans": [ { "bbox": [ 302, 662, 526, 716 ], "type": "text", "content": "We conduct three sets of experiments on Multihop-RAG and the Qwen model while controlling different variables, investigating four key factors according to our pipeline:" } ] } ], "index": 17 }, { "bbox": [ 316, 725, 525, 775 ], "type": "list", "angle": 0, "index": 20, "blocks": [ { "bbox": [ 316, 725, 525, 751 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 316, 725, 525, 751 ], "spans": [ { "bbox": [ 316, 725, 525, 751 ], "type": "text", "content": "- selection of " }, { "bbox": [ 316, 725, 525, 751 ], "type": "inline_equation", "content": "k" }, { "bbox": [ 316, 725, 525, 751 ], "type": "text", "content": ", we explore values of " }, { "bbox": [ 316, 725, 525, 751 ], "type": "inline_equation", "content": "k" }, { "bbox": [ 316,
725, 525, 751 ], "type": "text", "content": " at 200, 500, and 1000;" } ] } ], "index": 18 }, { "bbox": [ 316, 761, 524, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 316, 761, 524, 775 ], "spans": [ { "bbox": [ 316, 761, 524, 775 ], "type": "text", "content": "- selection of " }, { "bbox": [ 316, 761, 524, 775 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 316, 761, 524, 775 ], "type": "text", "content": ", we examine " }, { "bbox": [ 316, 761, 524, 775 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 316, 761, 524, 775 ], "type": "text", "content": " values ranging" } ] } ], "index": 19 } ], "sub_type": "text" } ], "discarded_blocks": [ { "bbox": [ 286, 781, 309, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 781, 309, 791 ], "spans": [ { "bbox": [ 286, 781, 309, 791 ], "type": "text", "content": "6868" } ] } ], "index": 21 } ], "page_size": [ 595, 841 ], "page_idx": 12 }, { "para_blocks": [ { "type": "table", "bbox": [ 117, 68, 478, 128 ], "blocks": [ { "bbox": [ 117, 68, 478, 128 ], "lines": [ { "bbox": [ 117, 68, 478, 128 ], "spans": [ { "bbox": [ 117, 68, 478, 128 ], "type": "table", "html": "
<table><tr><th>Setting</th><th>Samples per Query</th><th>Total Samples</th><th>Hit@1</th><th>Hit@4</th><th>Hit@10</th><th>MAP@10</th></tr>
<tr><td>full set</td><td>1</td><td>5,972</td><td>41.02</td><td>69.36</td><td>84.79</td><td>26.96</td></tr>
<tr><td>subset (50%)</td><td>2</td><td>5,972</td><td>39.91</td><td>68.43</td><td>84.21</td><td>26.30</td></tr>
<tr><td>subset (25%)</td><td>4</td><td>5,972</td><td>40.31</td><td>68.03</td><td>84.08</td><td>26.48</td></tr></table>
", "image_path": "03ed060cedc80951012c612bec0c6ca82753311bb359930e5ddb15af48d0ce64.jpg" } ] } ], "index": 0, "angle": 0, "type": "table_body" } ], "index": 0 }, { "bbox": [ 186, 136, 405, 148 ], "lines": [ { "bbox": [ 186, 136, 405, 148 ], "spans": [ { "bbox": [ 186, 136, 405, 148 ], "type": "text", "content": "Table 9: Ablation study of Query Generation Module." } ] } ], "index": 1, "angle": 0, "type": "text" }, { "bbox": [ 89, 169, 151, 182 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 89, 169, 151, 182 ], "spans": [ { "bbox": [ 89, 169, 151, 182 ], "type": "text", "content": "from 6 to 10;" } ] } ], "index": 2 }, { "bbox": [ 81, 198, 291, 389 ], "type": "list", "angle": 0, "index": 5, "blocks": [ { "bbox": [ 81, 198, 291, 334 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 81, 198, 291, 334 ], "spans": [ { "bbox": [ 81, 198, 291, 334 ], "type": "text", "content": "- sampling strategy, compared fine-to-coarse and uniform approaches, fixing the first partition from 0 to 3 for an informative positive sample, while dividing the remaining partitions based on the chosen strategy. Specifically, when using the fine-to-coarse strategy, for a given " }, { "bbox": [ 81, 198, 291, 334 ], "type": "inline_equation", "content": "k" }, { "bbox": [ 81, 198, 291, 334 ], "type": "text", "content": " and " }, { "bbox": [ 81, 198, 291, 334 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 81, 198, 291, 334 ], "type": "text", "content": ", the length of the next interval is twice the length of the previous interval. 
This can be represented by the formula: " }, { "bbox": [ 81, 198, 291, 334 ], "type": "inline_equation", "content": "L(\\mathcal{P}_{i + 1}) = 2L(\\mathcal{P}_i)" }, { "bbox": [ 81, 198, 291, 334 ], "type": "text", "content": ";" } ] } ], "index": 3 }, { "bbox": [ 81, 349, 291, 389 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 81, 349, 291, 389 ], "spans": [ { "bbox": [ 81, 349, 291, 389 ], "type": "text", "content": "- hyperparameter " }, { "bbox": [ 81, 349, 291, 389 ], "type": "inline_equation", "content": "\\alpha" }, { "bbox": [ 81, 349, 291, 389 ], "type": "text", "content": ", for convenience, we work with its reciprocal, " }, { "bbox": [ 81, 349, 291, 389 ], "type": "inline_equation", "content": "1 / \\alpha" }, { "bbox": [ 81, 349, 291, 389 ], "type": "text", "content": ", with values of 0.1, 0.2, 0.5, 0.7, and 1.0." } ] } ], "index": 4 } ], "sub_type": "text" }, { "bbox": [ 79, 404, 267, 417 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 79, 404, 267, 417 ], "spans": [ { "bbox": [ 79, 404, 267, 417 ], "type": "text", "content": "Our experiments are structured as follows:" } ] } ], "index": 6 }, { "bbox": [ 77, 433, 291, 584 ], "type": "list", "angle": 0, "index": 10, "blocks": [ { "bbox": [ 77, 433, 290, 473 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 77, 433, 290, 473 ], "spans": [ { "bbox": [ 77, 433, 290, 473 ], "type": "text", "content": "1. We fix temperature " }, { "bbox": [ 77, 433, 290, 473 ], "type": "inline_equation", "content": "= 1" }, { "bbox": [ 77, 433, 290, 473 ], "type": "text", "content": " and " }, { "bbox": [ 77, 433, 290, 473 ], "type": "inline_equation", "content": "k = 1000" }, { "bbox": [ 77, 433, 290, 473 ], "type": "text", "content": ", and conduct experiments with different values of " }, { "bbox": [ 77, 433, 290, 473 ], "type": "inline_equation", "content": "m" }, { "bbox": [ 77, 433, 290, 473 ], "type": "text", "content": " and sampling strategies." 
} ] } ], "index": 7 }, { "bbox": [ 77, 488, 291, 528 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 77, 488, 291, 528 ], "spans": [ { "bbox": [ 77, 488, 291, 528 ], "type": "text", "content": "2. We fix temperature " }, { "bbox": [ 77, 488, 291, 528 ], "type": "inline_equation", "content": "= 1" }, { "bbox": [ 77, 488, 291, 528 ], "type": "text", "content": ", " }, { "bbox": [ 77, 488, 291, 528 ], "type": "inline_equation", "content": "m = 10" }, { "bbox": [ 77, 488, 291, 528 ], "type": "text", "content": ", and the fine-to-coarse strategy, then investigate different values of " }, { "bbox": [ 77, 488, 291, 528 ], "type": "inline_equation", "content": "k" }, { "bbox": [ 77, 488, 291, 528 ], "type": "text", "content": "." } ] } ], "index": 8 }, { "bbox": [ 77, 544, 290, 584 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 77, 544, 290, 584 ], "spans": [ { "bbox": [ 77, 544, 290, 584 ], "type": "text", "content": "3. We fix " }, { "bbox": [ 77, 544, 290, 584 ], "type": "inline_equation", "content": "k = 500" }, { "bbox": [ 77, 544, 290, 584 ], "type": "text", "content": ", " }, { "bbox": [ 77, 544, 290, 584 ], "type": "inline_equation", "content": "m = 10" }, { "bbox": [ 77, 544, 290, 584 ], "type": "text", "content": ", and the fine-to-coarse strategy, then examine the effect of varying temperature." } ] } ], "index": 9 } ], "sub_type": "text" }, { "bbox": [ 67, 599, 292, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 599, 292, 775 ], "spans": [ { "bbox": [ 67, 599, 292, 775 ], "type": "text", "content": "Our ablation experiment results in Table 10 demonstrate that fine-tuned embedding models with lower alignment and higher uniformity tend to achieve better results on the retrieval task. We observe a strong correlation between retrieval performance and these two properties. Specifically, embedding models with better alignment tend to achieve superior retrieval results. 
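The fine-to-coarse partitioning rule above, L(P_{i+1}) = 2L(P_i) with a fixed first partition covering ranks 0 to 3, can be sketched as follows. The boundary rounding and the rescaling of the doubling lengths so that the last partition ends exactly at rank k are illustrative assumptions, not the authors' code:

```python
def fine_to_coarse_partitions(k, m, first_end=4):
    """Split BM25 ranks [0, k) into m partitions: a fixed first
    partition [0, first_end) for the informative positive sample, then
    m-1 intervals whose lengths double, L(P_{i+1}) = 2 * L(P_i),
    rescaled so the final boundary lands exactly at k."""
    weights = [2 ** i for i in range(m - 1)]   # doubling interval lengths
    total, span = sum(weights), k - first_end
    bounds, acc = [0, first_end], first_end
    for w in weights[:-1]:
        acc += round(span * w / total)          # proportional, rounded boundary
        bounds.append(acc)
    bounds.append(k)                            # last partition closes at k
    return [(bounds[i], bounds[i + 1]) for i in range(m)]

def uniform_partitions(k, m, first_end=4):
    """Baseline strategy: the remaining ranks are split into m-1
    (approximately) equal slices."""
    step = (k - first_end) / (m - 1)
    bounds = [0] + [first_end + round(i * step) for i in range(m - 1)] + [k]
    return [(bounds[i], bounds[i + 1]) for i in range(m)]
```

Under fine-to-coarse, partitions near the top of the BM25 ranking stay narrow (fine) while lower-ranked regions become progressively wider (coarse), which matches the intent of sampling harder negatives more precisely.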
Moreover, when alignment is similar, models with larger uniformity exhibit better retrieval performance. This suggests that we can leverage our strategy to adjust alignment and uniformity, ultimately optimizing retrieval performance." } ] } ], "index": 11 }, { "bbox": [ 303, 169, 525, 196 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 303, 169, 525, 196 ], "spans": [ { "bbox": [ 303, 169, 525, 196 ], "type": "text", "content": "E Alignment and Uniformity: Details and Discussion" } ] } ], "index": 12 }, { "bbox": [ 302, 204, 526, 339 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 204, 526, 339 ], "spans": [ { "bbox": [ 302, 204, 526, 339 ], "type": "text", "content": "In the work of Wang and Isola (2020), Alignment, which measures how well similar data points are positioned in the embedding space, is quantified by the mean Euclidean distance between the embeddings of all positive pairs. Uniformity, which reflects how well the data points are distributed across the embedding space, is quantified using the Gaussian potential kernel, capturing the pairwise similarity across all data points in the distribution. They are defined as follows:" } ] } ], "index": 13 }, { "bbox": [ 309, 349, 459, 362 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 349, 459, 362 ], "spans": [ { "bbox": [ 309, 349, 459, 362 ], "type": "text", "content": "Alignment " }, { "bbox": [ 309, 349, 459, 362 ], "type": "inline_equation", "content": "= \\mathbb{E}_{x,y\\in pos}[\\| e(x) - e(y)\\| _2^2 ]" } ] } ], "index": 14 }, { "bbox": [ 309, 364, 518, 378 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 309, 364, 518, 378 ], "spans": [ { "bbox": [ 309, 364, 518, 378 ], "type": "text", "content": "Uniformity " }, { "bbox": [ 309, 364, 518, 378 ], "type": "inline_equation", "content": "= \\log \\mathbb{E}_{x,y\\in p_{data}}[\\exp(-2\\| e(x) - e(y)\\| _2^2)]" } ] } ], "index": 15 }, { "bbox": [ 302, 385, 526, 507 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302,
385, 526, 507 ], "spans": [ { "bbox": [ 302, 385, 526, 507 ], "type": "text", "content": "where " }, { "bbox": [ 302, 385, 526, 507 ], "type": "inline_equation", "content": "x, y \\in pos" }, { "bbox": [ 302, 385, 526, 507 ], "type": "text", "content": " represents the positive pairs in the dataset, and " }, { "bbox": [ 302, 385, 526, 507 ], "type": "inline_equation", "content": "p_{data}" }, { "bbox": [ 302, 385, 526, 507 ], "type": "text", "content": " is the data distribution of all data points, " }, { "bbox": [ 302, 385, 526, 507 ], "type": "inline_equation", "content": "e(\\cdot)" }, { "bbox": [ 302, 385, 526, 507 ], "type": "text", "content": " is the embedding model that maps input data points to their corresponding embeddings in a high-dimensional space. In our experiments, " }, { "bbox": [ 302, 385, 526, 507 ], "type": "inline_equation", "content": "x, y \\in pos" }, { "bbox": [ 302, 385, 526, 507 ], "type": "text", "content": " refer to the question and its corresponding evidence chunk, while we randomly sample chunks from each document, forming a set of " }, { "bbox": [ 302, 385, 526, 507 ], "type": "inline_equation", "content": "p_{data}" }, { "bbox": [ 302, 385, 526, 507 ], "type": "text", "content": " to compute uniformity." } ] } ], "index": 16 }, { "bbox": [ 302, 507, 527, 697 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 507, 527, 697 ], "spans": [ { "bbox": [ 302, 507, 527, 697 ], "type": "text", "content": "Since fine-tuning can further amend the model's alignment (Gao et al., 2021), making it difficult to compare across different models, we introduce a scaling factor to address this. A model with high alignment does not necessarily perform worse in retrieval than one with low alignment. If a high-alignment model also ensures that negative samples are more dispersed relative to positive ones, it can still achieve strong retrieval performance. 
Considering this, we define the distance between the query and its nearest embedding in the database as a scaling factor for alignment. In the following experiments, we use the normalized version of alignment, which is defined as follows:" } ] } ], "index": 17 }, { "bbox": [ 320, 712, 508, 740 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 320, 712, 508, 740 ], "spans": [ { "bbox": [ 320, 712, 508, 740 ], "type": "interline_equation", "content": "\\mathrm{Alignment}_{norm} = \\mathbb{E}_{x, y \\in pos} \\left[ \\frac{\\| e(x) - e(y) \\|_{2}^{2}}{\\| e(x) - e(y_{\\mathrm{nearest}}) \\|_{2}^{2}} \\right]", "image_path": "e2e79f39cb1298e5e05dad44fea3e3527699388bab22315ad98d1b30dabeb7ae.jpg" } ] } ], "index": 18 }, { "bbox": [ 302, 748, 526, 776 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 748, 526, 776 ], "spans": [ { "bbox": [ 302, 748, 526, 776 ], "type": "text", "content": "where " }, { "bbox": [ 302, 748, 526, 776 ], "type": "inline_equation", "content": "e(y_{\\mathrm{nearest}})" }, { "bbox": [ 302, 748, 526, 776 ], "type": "text", "content": " refers to the closest embedding in the database to the question embedding " }, { "bbox": [ 302, 748, 526, 776 ], "type": "inline_equation", "content": "e(x)" }, { "bbox": [ 302, 748, 526, 776 ], "type": "text", "content": "." } ] } ], "index": 19 } ], "discarded_blocks": [ { "bbox": [ 286, 780, 309, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 780, 309, 791 ], "spans": [ { "bbox": [ 286, 780, 309, 791 ], "type": "text", "content": "6869" } ] } ], "index": 20 } ], "page_size": [ 595, 841 ], "page_idx": 13 }, { "para_blocks": [ { "type": "table", "bbox": [ 71, 68, 523, 383 ], "blocks": [ { "bbox": [ 71, 68, 523, 383 ], "lines": [ { "bbox": [ 71, 68, 523, 383 ], "spans": [ { "bbox": [ 71, 68, 523, 383 ], "type": "table", "html": "
<table>
<tr><th>Method</th><th>Alignment</th><th>Uniformity</th><th>Hit@1</th><th>Hit@4</th><th>Hit@10</th><th>MAP@10</th></tr>
<tr><td>Base</td><td>1.2422</td><td>2.7624</td><td>33.97</td><td>59.69</td><td>76.50</td><td>22.22</td></tr>
<tr><td>m=6 k=1000 fine-to-coarse</td><td>1.2031</td><td>3.1258</td><td>38.94</td><td>67.45</td><td>82.44</td><td>25.94</td></tr>
<tr><td>m=7 k=1000 fine-to-coarse</td><td>1.1953</td><td>3.1907</td><td>41.02</td><td>69.00</td><td>83.99</td><td>26.76</td></tr>
<tr><td>m=8 k=1000 fine-to-coarse</td><td>1.1953</td><td>3.3276</td><td>39.38</td><td>68.91</td><td>83.33</td><td>26.29</td></tr>
<tr><td>m=9 k=1000 fine-to-coarse</td><td>1.2031</td><td>3.3266</td><td>40.58</td><td>68.34</td><td>83.06</td><td>26.54</td></tr>
<tr><td>m=10 k=1000 fine-to-coarse</td><td>1.2031</td><td>3.3267</td><td>40.04</td><td>68.43</td><td>83.55</td><td>26.43</td></tr>
<tr><td>m=6 k=1000 uniform</td><td>1.2734</td><td>3.6012</td><td>36.98</td><td>64.79</td><td>80.27</td><td>24.41</td></tr>
<tr><td>m=7 k=1000 uniform</td><td>1.2656</td><td>3.5860</td><td>36.76</td><td>65.19</td><td>81.37</td><td>24.79</td></tr>
<tr><td>m=8 k=1000 uniform</td><td>1.2578</td><td>3.6276</td><td>38.67</td><td>67.49</td><td>82.35</td><td>25.61</td></tr>
<tr><td>m=9 k=1000 uniform</td><td>1.2578</td><td>3.6222</td><td>38.18</td><td>65.90</td><td>81.46</td><td>25.24</td></tr>
<tr><td>m=10 k=1000 uniform</td><td>1.2734</td><td>3.6265</td><td>36.50</td><td>64.26</td><td>80.71</td><td>24.39</td></tr>
<tr><td>k=1000 uniform m=10</td><td>1.2734</td><td>3.6265</td><td>36.50</td><td>64.26</td><td>80.71</td><td>24.39</td></tr>
<tr><td>k=500 uniform m=10</td><td>1.2578</td><td>3.6303</td><td>36.76</td><td>65.45</td><td>81.46</td><td>24.72</td></tr>
<tr><td>k=200 uniform m=10</td><td>1.2422</td><td>3.6452</td><td>37.69</td><td>66.39</td><td>82.97</td><td>25.23</td></tr>
<tr><td>k=1000 fine-to-coarse m=10</td><td>1.2031</td><td>3.3267</td><td>40.04</td><td>68.43</td><td>83.55</td><td>26.43</td></tr>
<tr><td>k=500 fine-to-coarse m=10</td><td>1.1953</td><td>3.3675</td><td>40.71</td><td>68.74</td><td>83.50</td><td>26.67</td></tr>
<tr><td>k=200 fine-to-coarse m=10</td><td>1.1953</td><td>3.3896</td><td>38.85</td><td>68.65</td><td>83.10</td><td>26.11</td></tr>
<tr><td>1/α=0.1 k=1000 fine-to-coarse m=10</td><td>1.1953</td><td>2.1774</td><td>35.48</td><td>63.02</td><td>78.14</td><td>23.96</td></tr>
<tr><td>1/α=0.2 k=1000 fine-to-coarse m=10</td><td>1.1875</td><td>2.6560</td><td>37.83</td><td>66.43</td><td>81.46</td><td>25.47</td></tr>
<tr><td>1/α=0.5 k=1000 fine-to-coarse m=10</td><td>1.1875</td><td>3.2849</td><td>40.09</td><td>67.63</td><td>82.88</td><td>26.34</td></tr>
<tr><td>1/α=0.7 k=1000 fine-to-coarse m=10</td><td>1.1953</td><td>3.3411</td><td>39.96</td><td>68.29</td><td>83.10</td><td>26.45</td></tr>
<tr><td>1/α=1.0 k=1000 fine-to-coarse m=10</td><td>1.1953</td><td>3.3675</td><td>40.71</td><td>68.74</td><td>83.50</td><td>26.67</td></tr>
</table>
", "image_path": "fc29bd10a7fa9af7e8dcd60c228fee035f43ab11add2510b66a3dc95ebe7379a.jpg" } ] } ], "index": 0, "angle": 0, "type": "table_body" } ], "index": 0 }, { "bbox": [ 182, 391, 409, 402 ], "lines": [ { "bbox": [ 182, 391, 409, 402 ], "spans": [ { "bbox": [ 182, 391, 409, 402 ], "type": "text", "content": "Table 10: Ablation study of the Relevant Sampling Module." } ] } ], "index": 1, "angle": 0, "type": "text" }, { "bbox": [ 67, 424, 290, 492 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 424, 290, 492 ], "spans": [ { "bbox": [ 67, 424, 290, 492 ], "type": "text", "content": "Finally, since the original uniformity is a negative value, we report its absolute value in our experiments. This makes comparison and analysis easier: a larger absolute value indicates that the embedding distribution is more uniform." } ] } ], "index": 2 }, { "bbox": [ 68, 512, 285, 526 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 512, 285, 526 ], "spans": [ { "bbox": [ 68, 512, 285, 526 ], "type": "text", "content": "F Prompts Used for Query Perturbation" } ] } ], "index": 3 }, { "bbox": [ 67, 540, 289, 567 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 540, 289, 567 ], "spans": [ { "bbox": [ 67, 540, 289, 567 ], "type": "text", "content": "The LLM prompts used in the keyword-masking experiments are detailed as follows:" } ] } ], "index": 4 }, { "bbox": [ 72, 584, 205, 594 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 72, 584, 205, 594 ], "spans": [ { "bbox": [ 72, 584, 205, 594 ], "type": "text", "content": "Prompt for Extracting Keywords:" } ] } ], "index": 5 }, { "bbox": [ 72, 595, 285, 624 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 72, 595, 285, 624 ], "spans": [ { "bbox": [ 72, 595, 285, 624 ], "type": "text", "content": "Given a query and a paragraph including the answer of the query, please extract all the common keywords that query and paragraph both have:" } ] } ], "index": 6 }, { "bbox": [ 73, 624, 95, 
633 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 73, 624, 95, 633 ], "spans": [ { "bbox": [ 73, 624, 95, 633 ], "type": "text", "content": "Note:" } ] } ], "index": 7 }, { "bbox": [ 73, 634, 285, 724 ], "type": "list", "angle": 0, "index": 11, "blocks": [ { "bbox": [ 73, 634, 285, 674 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 634, 285, 674 ], "spans": [ { "bbox": [ 73, 634, 285, 674 ], "type": "text", "content": "1. The definition of keywords is: words in the query and paragraph that are particularly distinctive and related to the main topic. Less important pronouns or frequently occurring words do not fall into this category." } ] } ], "index": 8 }, { "bbox": [ 73, 674, 285, 694 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 674, 285, 694 ], "spans": [ { "bbox": [ 73, 674, 285, 694 ], "type": "text", "content": "2. The words you extract must appear in both the query and the paragraph." } ] } ], "index": 9 }, { "bbox": [ 73, 694, 284, 724 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 694, 284, 724 ], "spans": [ { "bbox": [ 73, 694, 284, 724 ], "type": "text", "content": "3. 
Do not output other format, just list all the words as follows: investigation, Eastwood, Filing" } ] } ], "index": 10 } ], "sub_type": "text" }, { "bbox": [ 73, 724, 100, 734 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 73, 724, 100, 734 ], "spans": [ { "bbox": [ 73, 724, 100, 734 ], "type": "text", "content": "Query:" } ] } ], "index": 12 }, { "bbox": [ 73, 734, 101, 744 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 73, 734, 101, 744 ], "spans": [ { "bbox": [ 73, 734, 101, 744 ], "type": "text", "content": "{query}" } ] } ], "index": 13 }, { "bbox": [ 73, 745, 114, 754 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 745, 114, 754 ], "spans": [ { "bbox": [ 73, 745, 114, 754 ], "type": "text", "content": "Paragraph:" } ] } ], "index": 14 }, { "bbox": [ 73, 754, 117, 764 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 73, 754, 117, 764 ], "spans": [ { "bbox": [ 73, 754, 117, 764 ], "type": "text", "content": "{paragraph}" } ] } ], "index": 15 }, { "bbox": [ 73, 764, 111, 774 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 764, 111, 774 ], "spans": [ { "bbox": [ 73, 764, 111, 774 ], "type": "text", "content": "keywords:" } ] } ], "index": 16 }, { "bbox": [ 308, 425, 443, 435 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 308, 425, 443, 435 ], "spans": [ { "bbox": [ 308, 425, 443, 435 ], "type": "text", "content": "Prompt for Generating Synonyms:" } ] } ], "index": 17 }, { "bbox": [ 308, 435, 520, 465 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 435, 520, 465 ], "spans": [ { "bbox": [ 308, 435, 520, 465 ], "type": "text", "content": "Given a query and a set of its keywords, generate substituted words or phrases for these keywords that preserve the original semantic meaning of the query." 
} ] } ], "index": 18 }, { "bbox": [ 309, 465, 330, 474 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 309, 465, 330, 474 ], "spans": [ { "bbox": [ 309, 465, 330, 474 ], "type": "text", "content": "Note:" } ] } ], "index": 19 }, { "bbox": [ 308, 475, 520, 545 ], "type": "list", "angle": 0, "index": 23, "blocks": [ { "bbox": [ 308, 475, 520, 505 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 475, 520, 505 ], "spans": [ { "bbox": [ 308, 475, 520, 505 ], "type": "text", "content": "1. Ensure the number of keywords remains unchanged, with one substitution for each keyword. Maintain the query's intent, context, and grammatical correctness." } ] } ], "index": 20 }, { "bbox": [ 308, 505, 520, 525 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 505, 520, 525 ], "spans": [ { "bbox": [ 308, 505, 520, 525 ], "type": "text", "content": "2. Avoid altering the overall structure and purpose of the query." } ] } ], "index": 21 }, { "bbox": [ 308, 525, 520, 545 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 525, 520, 545 ], "spans": [ { "bbox": [ 308, 525, 520, 545 ], "type": "text", "content": "3. 
Return the substituted keywords in the same format with Keywords like: investigation, Eastwood, Filing" } ] } ], "index": 22 } ], "sub_type": "text" }, { "bbox": [ 308, 545, 334, 554 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 308, 545, 334, 554 ], "spans": [ { "bbox": [ 308, 545, 334, 554 ], "type": "text", "content": "Query:" } ] } ], "index": 24 }, { "bbox": [ 308, 555, 336, 565 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 308, 555, 336, 565 ], "spans": [ { "bbox": [ 308, 555, 336, 565 ], "type": "text", "content": "{query}" } ] } ], "index": 25 }, { "bbox": [ 308, 565, 349, 574 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 565, 349, 574 ], "spans": [ { "bbox": [ 308, 565, 349, 574 ], "type": "text", "content": "Keywords:" } ] } ], "index": 26 }, { "bbox": [ 308, 574, 351, 585 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 574, 351, 585 ], "spans": [ { "bbox": [ 308, 574, 351, 585 ], "type": "text", "content": "{keywords}" } ] } ], "index": 27 }, { "bbox": [ 308, 585, 407, 595 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 585, 407, 595 ], "spans": [ { "bbox": [ 308, 585, 407, 595 ], "type": "text", "content": "Your substituted keywords:" } ] } ], "index": 28 } ], "discarded_blocks": [ { "bbox": [ 286, 781, 309, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 286, 781, 309, 791 ], "spans": [ { "bbox": [ 286, 781, 309, 791 ], "type": "text", "content": "6870" } ] } ], "index": 29 } ], "page_size": [ 595, 841 ], "page_idx": 14 } ], "_backend": "vlm", "_version_name": "2.6.4" }
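For concreteness, here is a minimal sketch of how the extraction template above might be filled and how its comma-separated response format ("investigation, Eastwood, Filing") could be parsed. The helper names `build_extraction_prompt` and `parse_keywords` are hypothetical, and the Note instructions are abbreviated; only the field layout follows the template.

```python
def build_extraction_prompt(query: str, paragraph: str) -> str:
    """Fill the keyword-extraction template (Note items abbreviated)."""
    return (
        "Given a query and a paragraph including the answer of the query, "
        "please extract all the common keywords that query and paragraph both have:\n"
        f"Query:\n{query}\n"
        f"Paragraph:\n{paragraph}\n"
        "keywords:"
    )

def parse_keywords(llm_output: str) -> list:
    """Split a comma-separated response into a clean keyword list."""
    return [w.strip() for w in llm_output.split(",") if w.strip()]
```

The synonym-generation prompt can be assembled the same way, substituting the extracted keyword list for the paragraph field; its response uses the identical comma-separated format, so the same parser applies.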